Access the cluster using the rbd client
This guide shows you how to access your cluster using the rbd client.
Last update 22nd June 2022
There are different ways to use your Ceph cluster. This guide describes how to map an image from your cluster using the rbd client.
You must first make sure that you have completed the following steps:
For deb-based distributions:
ubuntu@server:~$ sudo apt-get -y install ceph ceph-common
[...]
Setting up ceph-common (10.2.0-0ubuntu0.16.04.2) ...
Setting up ceph (10.2.0-0ubuntu0.16.04.2) ...
For rpm-based distributions:
[centos@server ~]$ sudo yum install -y ceph-common
[...]
Installed:
ceph-common.x86_64 1:0.80.7-3.el7
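Optionally, you can confirm that the client tools are available by printing the installed version (the output will depend on your distribution and Ceph release):
ubuntu@server:~$ ceph --version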
Create the file /etc/ceph/ceph.conf
[global]
mon_host = <mon_1_IP>,<mon_2_IP>,<mon_3_IP>
Create the file /etc/ceph/ceph.client.<ceph_user_name>.keyring
[client.<ceph_user_name>]
key = <my_user_key>
<mon_X_IP> has to be replaced with the monitor IPs, which you can find in your Ceph as a Service manager.
<my_user_key> has to be replaced with the user's key, which you can also find in your Ceph as a Service manager.
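For illustration only, here is what the two files could look like once filled in. The IPs below are placeholders and the key is intentionally left as a placeholder; use the values from your own Ceph as a Service manager. The rest of this guide assumes the Ceph user is named myuser:
ubuntu@server:~$ cat /etc/ceph/ceph.conf
[global]
mon_host = 192.0.2.11,192.0.2.12,192.0.2.13
ubuntu@server:~$ cat /etc/ceph/ceph.client.myuser.keyring
[client.myuser]
key = <paste_your_key_here>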
You can check the configuration by listing the images inside your pool.
ubuntu@server:~$ rbd -n client.myuser list mypool
In this case, the result is empty because we have not created an image yet. If you get an error, double-check your configuration.
You can't mount a pool directly; you have to mount an image that exists in the pool. The following command creates a 10 TiB image (the -s option expects a size in megabytes, so 10*1024*1024 MB):
ubuntu@server:~$ rbd -n client.myuser create mypool/myimage -s $((10*1024*1024)) --image-format 2 --image-feature layering
ubuntu@server:~$ rbd -n client.myuser list mypool
myimage
Listing the pool content confirms that the image was created correctly.
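If you want more details about the image (size, object layout, enabled features), you can query it with rbd info; this is optional and assumes the same pool, image and user names as above:
ubuntu@server:~$ rbd -n client.myuser info mypool/myimage
The next step is to map the image to a block device on the client.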
ubuntu@server:~$ sudo rbd -n client.myuser map mypool/myimage
/dev/rbd0
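You can list the images currently mapped on this server, for example to confirm which device corresponds to which image:
ubuntu@server:~$ sudo rbd showmapped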
The rbd image is now mapped to /dev/rbd0 as a block device. Therefore, we have to set up a filesystem on it.
ubuntu@server:~$ sudo mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=33, agsize=83885056 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0
data = bsize=4096 blocks=2684354560, imaxpct=5
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
ubuntu@server:~$ sudo mkdir /mnt/rbd
ubuntu@server:~$ sudo mount /dev/rbd0 /mnt/rbd
ubuntu@server:~$ df -h /mnt/rbd
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 10T 34M 10T 1% /mnt/rbd
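If you want the mapping and mount to survive a reboot, the ceph-common package ships an rbdmap helper that maps the images listed in /etc/ceph/rbdmap at boot. Below is a minimal sketch, assuming the pool, image, user and mount point used in this guide; check the rbdmap man page of your Ceph release before relying on it:
ubuntu@server:~$ echo 'mypool/myimage id=myuser,keyring=/etc/ceph/ceph.client.myuser.keyring' | sudo tee -a /etc/ceph/rbdmap
ubuntu@server:~$ echo '/dev/rbd/mypool/myimage /mnt/rbd xfs noauto,_netdev 0 0' | sudo tee -a /etc/fstab
ubuntu@server:~$ sudo systemctl enable rbdmap.service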
You can now use your Ceph cluster!
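When you no longer need the image on this server, you can unmount the filesystem and unmap the device (assuming the device and mount point from the steps above):
ubuntu@server:~$ sudo umount /mnt/rbd
ubuntu@server:~$ sudo rbd unmap /dev/rbd0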
Visit our dedicated Discord channel: https://discord.gg/ovhcloud. Ask questions, provide feedback and interact directly with the team that builds our Storage and Backup services.
Join our community of users on https://community.ovh.com/en/.