Applying RBD Block Devices in Ceph Distributed Storage
Make sure the cluster is in a healthy state (the detailed cluster setup is omitted here);
the first half of https://blog.51cto.com/jdonghong/244175 can be used as a reference.
Ceph environment configuration
Deploying RBD (RADOS Block Device)
Install ceph on the clients (in this walkthrough the clients are 192.168.27.210 and 192.168.26.112):
ceph-deploy install bddb.com
Push the configuration file to the client:
[root@master idc-cluster]# ceph-deploy admin bddb.com
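After `ceph-deploy admin`, the client's /etc/ceph directory should contain the admin keyring and a ceph.conf of roughly the following shape. This is a sketch only: the fsid and monitor values below are placeholders, not values taken from this cluster.

```
[global]
# placeholders -- substitute your own cluster's values
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = master
mon_host = 192.168.27.210
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```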
Create an RBD image on the client:
[root@BDDB ceph]# rbd create idc --size 4096 --image-feature layering
[root@BDDB ceph]# rbd ls
idc
[root@BDDB ceph]#
Map the created image on the client:
[root@BDDB ceph]# rbd map idc
/dev/rbd0
[root@BDDB ceph]#
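A mapping created with `rbd map` does not survive a reboot. To re-map images automatically at boot, the rbdmap service reads entries from /etc/ceph/rbdmap; an entry for the image above would look like this (assuming the default `rbd` pool and the admin keyring pushed earlier):

```
rbd/idc id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```

On systemd-based systems the service is enabled with `systemctl enable rbdmap`.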
Create a filesystem on the block device (on the client node):
[root@BDDB ceph]# mkfs.xfs /dev/rbd0
(Note: the lsblk output further down reports rbd0 as ext4, so the device appears to have actually been formatted as ext4; this is also why resize2fs, an ext4 tool, is used in the resize step at the end.)
Mount it:
[root@BDDB ceph]# mount /dev/rbd/rbd/idc /ceph/rbd
Check the state and create some test files:
[root@BDDB ceph]# ls /ceph
rbd rbd2
[root@BDDB ceph]# ls /ceph -l
total 0
drwxr-xr-x 2 ceph ceph 6 Aug 23 11:17 rbd
drwxr-xr-x 2 ceph ceph 6 Aug 26 10:01 rbd2
[root@BDDB ceph]# mount /dev/rbd/rbd/idc /ceph/rbd
[root@BDDB ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 60G 32G 29G 53% /
/dev/sda5 60G 3.5G 57G 6% /data
/dev/sda1 497M 148M 350M 30% /boot
tmpfs 184M 0 184M 0% /run/user/0
/dev/rbd0 3.9G 16M 3.6G 1% /ceph/rbd
[root@BDDB ceph]# ls
ceph.client.admin.keyring ceph.conf rbdmap tmp3nza8m tmpGs8qYv tmpNTb5P9 tmpOJovru
[root@BDDB ceph]# cd /ceph/rbd
[root@BDDB rbd]# ls
123.txt 1.txt lost+found my1.txt my2.txt my3.txt
Map a second image, isc (a 10 GB image created beforehand in the same way as idc):
[root@BDDB rbd]# rbd map isc
/dev/rbd1
[root@BDDB rbd]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs 5f75d5de-3e02-43e3-a36d-57bc39e9a5ae /boot
├─sda2 xfs bab8e8ae-cb5e-4299-b879-54960f1a24b9 /
├─sda3 swap ce44af87-b8a7-40d2-8504-1c8fae81f613 [SWAP]
├─sda4
└─sda5 xfs cb8c9f72-8154-4ae9-aa57-54703dedfd06 /data
sr0
rbd0 ext4 cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe /ceph/rbd
rbd1 xfs 42989fcc-0746-4848-a957-0c01704865b8
[root@BDDB rbd]# mount /dev/rbd/rbd/isc /ceph/rbd2/
[root@BDDB rbd]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 60G 32G 29G 53% /
/dev/sda5 60G 3.5G 57G 6% /data
/dev/sda1 497M 148M 350M 30% /boot
tmpfs 184M 0 184M 0% /run/user/0
/dev/rbd0 3.9G 16M 3.6G 1% /ceph/rbd
/dev/rbd1 10G 33M 10G 1% /ceph/rbd2
[root@BDDB rbd]# cd /ceph/rbd2/
[root@BDDB rbd2]# ls
[root@BDDB rbd2]# touch {1..3}.txt
[root@BDDB rbd2]# ls
1.txt 2.txt 3.txt
[root@BDDB rbd2]# echo my test >1.txt
[root@BDDB rbd2]# ls
1.txt 2.txt 3.txt
[root@BDDB rbd2]# cat 1.txt
my test
[root@BDDB rbd2]#
Now map and mount the same block devices on another client to observe how RBD behaves.
First, inspect the image state from the other client:
[root@master rbd]# rbd ls
idc
isc
[root@master rbd]# rbd info idc isc
rbd: too many arguments
[root@master rbd]# rbd info idc
rbd image 'idc':
size 4096 MB in 1024 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.85456b8b4567
format: 2
features: layering
flags:
[root@master rbd]# rbd info isc
rbd image 'isc':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.148a56b8b4567
format: 2
features: layering
flags:
[root@master rbd]#
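The numbers reported by `rbd info` are internally consistent: `order 22` means each backing RADOS object is 2^22 bytes (4 MiB), and the object count is simply the image size divided by the object size. A quick sanity check (plain Python, not a Ceph API call):

```python
def rbd_object_count(size_mb: int, order: int) -> int:
    """Number of RADOS objects backing an image of size_mb MiB."""
    object_bytes = 1 << order          # order 22 -> 4 MiB objects
    size_bytes = size_mb * 1024 * 1024
    # round up: a partial trailing object still counts
    return -(-size_bytes // object_bytes)

print(rbd_object_count(4096, 22))   # idc:  1024 objects
print(rbd_object_count(10240, 22))  # isc:  2560 objects
```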
Map the RBD image on this host (rbd map):
[root@master rbd]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
fd0
sda
├─sda1 xfs fb314ba6-93e0-4d7d-bb80-9c6e5a92fd61 /boot
└─sda2 LVM2_member 3Zne0f-m5MZ-OP67-TQ2a-Lnzr-SGME-UNJnMK
├─centos-root xfs d009c83b-2ca3-4642-a956-9f967fa249e6 /
├─centos-swap swap cf124d61-2df9-44a4-bba2-cee94056f547 [SWAP]
└─centos-data xfs 538b2348-c8cd-4755-9ba9-2f3b10fb8f33 /data
sr0
rbd0 ext4 cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe /ceph/rbd
[root@master rbd]# rbd map isc
/dev/rbd1
[root@master rbd]#
Check the result of the mapping:
[root@master rbd]# lsblk -f
Mount the mapped device and observe the result. (Note: do not format the device again; it already carries the filesystem created on the other client, and re-formatting would destroy the data and could even produce errors or cluster problems.)
[root@master rbd]# mount /dev/rbd/rbd/isc /ceph/rbd2
[root@master rbd]# df -h
Enter the mount directory and check whether the files and contents created on the other client are present:
[root@master rbd]# cd /ceph/rbd2/
[root@master rbd2]# ls
1.txt 2.txt 3.txt
[root@master rbd2]# cat 1.txt
my test
[root@master rbd2]#
Switch to the other client and watch for file changes:
The file contents have not changed, so unmap and remap the image to observe the update:
[root@BDDB rbd2]# cd
[root@BDDB ~]# umount /ceph/rbd2/
[root@BDDB ~]# rbd unmap isc
[root@BDDB ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 60G 32G 29G 53% /
/dev/sda5 60G 3.5G 57G 6% /data
/dev/sda1 497M 148M 350M 30% /boot
tmpfs 184M 0 184M 0% /run/user/0
/dev/rbd0 3.9G 16M 3.6G 1% /ceph/rbd
[root@BDDB ~]#
After remapping, the file contents are updated:
[root@BDDB ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs 5f75d5de-3e02-43e3-a36d-57bc39e9a5ae /boot
├─sda2 xfs bab8e8ae-cb5e-4299-b879-54960f1a24b9 /
├─sda3 swap ce44af87-b8a7-40d2-8504-1c8fae81f613 [SWAP]
├─sda4
└─sda5 xfs cb8c9f72-8154-4ae9-aa57-54703dedfd06 /data
sr0
rbd0 ext4 cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe /ceph/rbd
rbd1 xfs 42989fcc-0746-4848-a957-0c01704865b8
[root@BDDB ~]# mount /dev/rbd/rbd/isc /ceph/rbd2/
[root@BDDB ~]# cd /ceph/rbd2/
[root@BDDB rbd2]# ls
1.txt 2.txt 3.txt
[root@BDDB rbd2]# cat 1.txt
my test
my test
my test
my test
my test
Now try editing the same file from both clients:
there is no conflict warning of any kind,
and after the contents are changed,
the two clients end up with inconsistent data.
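Why does this happen? Each client's kernel caches filesystem data and metadata independently, and nothing in RBD invalidates the other client's cache; a write only becomes visible to the other side after it drops its cache (unmap/remap and remount). A toy Python model of this behavior (purely illustrative, not Ceph code):

```python
class Client:
    """Toy model: one client with a private cache over a shared device."""
    def __init__(self, device: bytearray):
        self.device = device   # shared backing store (the RBD image)
        self.cache = {}        # client-local cache: offset -> byte value

    def read(self, offset: int) -> int:
        if offset not in self.cache:
            self.cache[offset] = self.device[offset]  # fill on miss
        return self.cache[offset]                     # may be stale

    def write(self, offset: int, value: int) -> None:
        self.cache[offset] = value
        self.device[offset] = value                   # reaches the device

    def remount(self) -> None:
        self.cache.clear()   # like umount + rbd unmap + rbd map + mount

device = bytearray(8)
a, b = Client(device), Client(device)
b.read(0)            # b now caches offset 0 (value 0)
a.write(0, 42)       # a's write lands on the shared device
print(b.read(0))     # 0  -- b still sees its stale cached value
b.remount()
print(b.read(0))     # 42 -- after "remapping", b sees a's write
```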
Growing (or shrinking) the image:
[root@BDDB ~]# rbd resize --size 20480 rbd/idc
[root@BDDB ~]# rbd info idc
rbd image 'idc':
size 20480 MB in 5120 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.85456b8b4567
format: 2
features: layering
flags:
[root@BDDB ~]# resize2fs /dev/rbd0
(resize2fs grows ext2/3/4 filesystems; since lsblk reports rbd0 as ext4, it works here. For an XFS filesystem the equivalent command is xfs_growfs run against the mount point.)
The files are still there after the resize:
[root@BDDB ~]# cd /ceph/rbd
[root@BDDB rbd]# ls
123.txt 1.txt lost+found my1.txt my2.txt my3.txt
[root@BDDB rbd]# cat my1.txt
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.27.210 master
192.168.27.211 client1
192.168.27.212 client2
192.168.27.213 client3
192.168.26.112 BDDB.com
192.168.26.112 bddb.com bddb
Summary: RBD lets multiple remote clients map the same image and mount it as a local storage device, but it does not support simultaneous mounting by multiple clients: writes propagate asynchronously and one client does not see another's changes until it remaps and remounts, so concurrent use leads to inconsistent, corrupted data. RBD is therefore a non-shared, asynchronous access method (truly shared access requires CephFS or a cluster filesystem on top of the block device). RBD supports online growing and shrinking, and recent Ceph releases support image features such as layering, striping, exclusive-lock, object-map, fast-diff, and deep-flatten.