Last updated 19th October 2021.
In this tutorial we are going to guide you through a simple example of setting up a Persistent Volume (PV) on your OVHcloud Managed Kubernetes Service.
Before you begin
This tutorial presupposes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about those topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
It also assumes you have read our Persistent Volumes on OVHcloud Managed Kubernetes guide.
When a Persistent Volume resource is created inside a Managed Kubernetes cluster, an associated Public Cloud Block Storage volume is automatically created with it. This volume is charged hourly and will appear in your Public Cloud project. For more information, please refer to the following documentation: Volume Block Storage price
Persistent Volumes (PV) and Persistent Volume Claims (PVC)
As the official documentation states:
- A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource.
- A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Or, if you prefer an analogy, PVCs are to PVs what pods are to nodes: PVCs consume abstract storage resources (the PVs) just as pods consume node resources.
So you want some persistent storage on your cluster
Let's say you need some persistent storage on your cluster: some kind of network storage, which for OVHcloud Managed Kubernetes Service currently means storage based on Cinder. In Kubernetes terms you will need two objects: a PersistentVolumeClaim and its associated PersistentVolume.
How do you get them? You simply need to create the PVC object in your cluster. Kubernetes will see your claim and, according to its available resources, allocate a PV corresponding to your claim.
Let's create a PVC
Copy the next YAML fragment into a test-pvc.yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cinder-high-speed
And apply it to your cluster:
kubectl apply -f test-pvc.yaml
Then you can see the PVC and the associated PV using kubectl:
kubectl get pvc
kubectl get pv
$ kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-pvc created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-pvc Bound ovh-managed-kubernetes-btw8lc-pvc-LONG-ID 10Gi RWO csi-cinder-high-speed 5s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
ovh-managed-kubernetes-btw8lc-pvc-LONG-ID 10Gi RWO Delete Bound default/test-pvc csi-cinder-high-speed 15s
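If you want to inspect a PV in more detail, you can describe it (replace <your-pv-name> with the name returned by kubectl get pv):
kubectl describe pv <your-pv-name>
Among other things, the output shows the storage class, the reclaim policy and the claim the volume is bound to.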
Using the PVC
Pods access storage by using the PVC as a volume. In the pod manifest you declare a volume and associate it with a PVC. The volume is then mounted to the host and into the pod.
For our example, let's create a test-pvc-pod.yaml file that deploys a simple Nginx server using our test-pvc PVC as an external volume:
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-pvc
And apply it to the cluster:
kubectl apply -f test-pvc-pod.yaml
$ kubectl apply -f test-pvc-pod.yaml
pod/test-pvc-pod created
$ kubectl describe pod test-pvc-pod
Name: test-pvc-pod
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: node-03/51.75.199.0
Start Time: Mon, 06 Jan 2020 11:38:16 +0100
Labels: <none>
[...]
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
[...]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 70s default-scheduler Successfully assigned default/test-pvc-pod to node-03
Normal SuccessfulAttachVolume 68s attachdetach-controller AttachVolume.Attach succeeded for volume "ovh-managed-kubernetes-btw8lc-pvc-LONG-ID"
Normal Pulling 60s kubelet, node-03 Pulling image "nginx"
Normal Pulled 59s kubelet, node-03 Successfully pulled image "nginx"
Normal Created 59s kubelet, node-03 Created container myfrontend
Normal Started 59s kubelet, node-03 Started container myfrontend
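To check that the volume is really mounted into the pod, you can, for example, write a test page into the Nginx document root and read it back. This is just a quick sanity check; the file name and content are arbitrary:
kubectl exec test-pvc-pod -- sh -c 'echo "It works!" > /usr/share/nginx/html/index.html'
kubectl exec test-pvc-pod -- cat /usr/share/nginx/html/index.html
Because the file is stored on the Persistent Volume, it remains available as long as the PVC (and its associated PV) exists, even if the pod is deleted and recreated.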
Storage Classes
We currently support two Storage Classes on OVHcloud Managed Kubernetes: csi-cinder-high-speed and csi-cinder-classic, both based on Cinder, the OpenStack Block Storage service. The difference between them is the associated physical storage device. csi-cinder-high-speed uses SSD, while csi-cinder-classic uses traditional spinning disks. Both are distributed transparently, on three physical local replicas.
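You can list the Storage Classes available on your cluster with kubectl; the default class is marked as such in the output:
kubectl get storageclass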
When you create a Persistent Volume Claim on your Kubernetes cluster, we provision the Cinder storage into your account. This storage is charged according to the OVH flexible cloud storage prices.
Since Kubernetes 1.11, support for expanding PersistentVolumeClaims (PVCs) is enabled by default, and it works on Cinder volumes. In order to learn how to resize them, please refer to the Resizing Persistent Volumes tutorial. Kubernetes PVC resizing only allows volumes to be expanded, not shrunk.
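As a quick illustration, expanding a volume is a matter of increasing the requested size in the PVC, for example with a patch like the one below (a sketch that grows our test-pvc from 10Gi to 20Gi; see the Resizing Persistent Volumes tutorial for the full procedure and its limitations):
kubectl patch pvc test-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'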
Access Modes
The way a PV can be mounted on a host depends on the capabilities of the resource provider. Each PV gets its own set of access modes describing that specific PV’s capabilities:
- ReadWriteOnce: the PV can be mounted as read-write by a single node
- ReadOnlyMany: the PV can be mounted read-only by many nodes
- ReadWriteMany: the PV can be mounted as read-write by many nodes
Our storage resource, Cinder, doesn't allow mounting a PV on several nodes at the same time, so you need to use the ReadWriteOnce access mode.
Reclaim policies
When you are done with a volume, you can delete the PVC to release the resource. At that point you have an unbound PV object on your cluster, and its fate depends on the reclaim policy set on the PV.
There are 2 possible reclaim policies:
- Retain: when the PVC is deleted, the PV still exists. The volume is considered released, but it is not yet available because the previous data remains on the volume. If you want to delete it, you must do it manually.
- Delete: when the PVC is deleted, the PV and the associated storage in the external infrastructure (i.e. the Cinder storage in our case) are both deleted.
Every Storage Class has a default reclaim policy, which can be changed for individual PV instances. On our Cinder-based storage classes, the default reclaim policy is Delete, as you can verify:
kubectl get pv
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
ovh-managed-kubernetes-btw8lc-pvc-LONG-ID 10Gi RWO Delete Bound default/test-pvc csi-cinder-high-speed 15s
If you delete the PVC, the associated PV is also deleted:
kubectl delete pvc test-pvc
kubectl get pvc
kubectl get pv
$ kubectl delete pvc test-pvc
persistentvolumeclaim "test-pvc" deleted
$ kubectl get pvc
No resources found.
$ kubectl get pv
No resources found.
Changing the reclaim policy
To illustrate how to change the reclaim policy, let's begin by creating a new PVC using the test-pvc.yaml file:
kubectl apply -f test-pvc.yaml
List the PV and get its name:
kubectl get pv
And patch it to change its reclaim policy:
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Where <your-pv-name> is the name of your chosen PersistentVolume.
Now you can verify that the PV has the right policy:
kubectl get pv
$ kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-pvc created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
ovh-managed-kubernetes-btw8lc-pvc-LONG-ID 10Gi RWO Delete Bound default/test-pvc csi-cinder-high-speed 19s
$ kubectl patch pv ovh-managed-kubernetes-btw8lc-pvc-LONG-ID -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/ovh-managed-kubernetes-btw8lc-pvc-LONG-ID patched
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
ovh-managed-kubernetes-btw8lc-pvc-LONG-ID 10Gi RWO Retain Bound default/test-pvc csi-cinder-high-speed 19s
In the preceding output, you can see that the volume bound to PVC default/test-pvc has reclaim policy Retain. It will not be automatically deleted when a user deletes PVC default/test-pvc.
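To finish the example, you can delete the PVC and see what happens to the PV: with the Retain policy it is not removed automatically, so you have to delete the PV object yourself, and the underlying Cinder volume may also need to be removed from your Public Cloud project. A minimal cleanup sketch:
kubectl delete pvc test-pvc
kubectl get pv
kubectl delete pv <your-pv-name>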