Using Ceph with Proxmox
Find out how to set up the Cloud Disk Array on Proxmox
Last updated 1st July 2021
This guide explains how to set up the Cloud Disk Array on Proxmox.
First of all, you need your Cloud Disk Array up and ready. Make sure you have a pool and a user created on the cluster, and the IP addresses of its monitors at hand.
In this guide we assume you already have your Proxmox installation up and running. This guide has been tested with Proxmox 6.4.
In order to use the Cloud Disk Array, Proxmox needs to know how to access it. This is done by adding the necessary data to the /etc/pve/storage.cfg file.
Log in to your Proxmox node, open the file and enter the following lines:
rbd: ovhcloud-cda
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        content images
        pool vms
        username proxmox
monhost: the IP list of CDA cluster monitors
content: the content type you want to host on the CDA
pool: the CDA pool name that will be used to store data
username: the username of the user connecting to the CDA

Your cluster is now configured. To be able to authenticate, Proxmox will also need the keyring.
In order to add the keyring, edit the file /etc/pve/priv/ceph/<STORAGE_ID>.keyring. Replace <STORAGE_ID> with the actual name you used in the storage.cfg file. In the following example output, the name is ovhcloud-cda.
root@proxmox:~$ cat /etc/pve/priv/ceph/ovhcloud-cda.keyring
[client.proxmox]
key = KLChQNJYQJCuXMBBrbsz2XllPn+5+cuXdIfJLg==
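To verify that Proxmox can reach the cluster from the command line, you can query the storage with pvesm, the Proxmox storage manager CLI. The storage name below is the one defined in storage.cfg:

root@proxmox:~$ pvesm status
root@proxmox:~$ pvesm list ovhcloud-cda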
You can now see your cluster info using the Proxmox web interface and create a VM on this storage.
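As a quick sketch, the same can be done from the CLI with qm; the VM ID, name, and disk size here are arbitrary examples, and ovhcloud-cda:32 allocates a new 32 GB disk on the CDA storage:

root@proxmox:~$ qm create 100 --name cda-test --memory 2048 --net0 virtio,bridge=vmbr0 --scsi0 ovhcloud-cda:32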
In order to use CephFS, you need to enable it through the Cloud Disk Array API. The user defined in the first step can be used to access both RBD and CephFS. The user has to be granted access to the cephfs.fs-default.data and cephfs.fs-default.meta pools.
After that, add the following lines to your /etc/pve/storage.cfg config file.
cephfs: ovhcloud-cda-cephfs
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        path /mnt/pve/cephfs
        content backup,vztmpl
        username proxmox
monhost: the IP list of CDA cluster monitors
path: the local mount point on the Proxmox node
content: the content type you want to host on the CDA
username: the username of the user connecting to the CDA

CephFS is now configured. You need to add the secret of your Proxmox user (proxmox in this example), so Proxmox can authenticate.
Edit the file /etc/pve/priv/ceph/<STORAGE_ID>.secret. Replace <STORAGE_ID> with the actual name you used in the storage.cfg file. In the following example output, the name is ovhcloud-cda-cephfs.
Unlike with the RBD keyring, you need to provide only the secret.
root@proxmox:~$ cat /etc/pve/priv/ceph/ovhcloud-cda-cephfs.secret
KLChQNJYQJCuXMBBrbsz2XllPn+5+cuXdIfJLg==
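Once the secret is in place, Proxmox mounts the share at the path defined in storage.cfg. A quick way to check that the storage is active:

root@proxmox:~$ pvesm status
root@proxmox:~$ df -h /mnt/pve/cephfs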
You can now download container templates and store them on CephFS:
root@pve:~$ pveam update
root@pve:~$ pveam available --section system
root@pve:~$ pveam download ovhcloud-cda-cephfs ubuntu-20.04-standard_20.04-1_amd64.tar.gz
Once a template has been downloaded, you can start using it to create containers.
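As a sketch, creating a container from the downloaded template could look like the line below. The container ID, hostname, and the local-lvm target are assumptions for illustration; the root disk has to live on another storage, since this CephFS storage only carries the backup and vztmpl content types:

root@pve:~$ pct create 101 ovhcloud-cda-cephfs:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz --hostname cda-test --memory 1024 --storage local-lvm --net0 name=eth0,bridge=vmbr0,ip=dhcp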
Visit our dedicated Discord channel: https://discord.gg/ovhcloud. Ask questions, provide feedback and interact directly with the team that builds our Storage and Backup services.
Join our community of users on https://community.ovh.com/en/.