Configuring multi-attach persistent volumes with OVHcloud NAS-HA

Find out how to configure a multi-attach persistent volume using OVHcloud NAS-HA

Last updated July 20th, 2021.

Objective

OVHcloud Managed Kubernetes natively integrates Block Storage as persistent volumes. This technology may however not be suited to some legacy or non-cloud-native applications, which often require sharing persistent data across different pods on multiple worker nodes (ReadWriteMany or RWX). If you need to do this for some of your workloads, one solution is to use NFS volumes. OVHcloud NAS-HA is a managed solution that lets you easily configure an NFS server and multiple NFS volumes. In this tutorial we are going to see how to configure your OVHcloud Managed Kubernetes cluster to use OVHcloud NAS-HA as an NFS provider for Kubernetes Persistent Volumes.

Requirements

This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about these topics, please have a look at the deploying a Hello World application documentation.

It also assumes you have an OVHcloud NAS-HA already available. If you don't, you can order one in the OVHcloud Control Panel.

You also need to have Helm installed on your workstation. Please refer to the How to install Helm on OVHcloud Managed Kubernetes Service tutorial.

Instructions

Step 1 - Creating a partition and granting your Managed Kubernetes Service access to it

Your NAS-HA can expose multiple partitions, and supports a variety of protocols. Each partition is accessible only from a specific range of IPs. We will create one exposing NFS and make it accessible from your Kubernetes worker nodes.

Access the UI for OVHcloud NAS-HA by clicking the NAS and CDN menu in the Server section of the OVHcloud Control Panel.

Create a new NFS partition in your Zpool:


Once the partition is created, we need to allow our Kubernetes nodes to access it.

Get our Kubernetes nodes' IPs:

kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
$ kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
51.77.204.175 51.77.205.79

Click on the Manage Access menu of our newly created partition.

Add our nodes' IPs one by one.

Your partition's access list should now contain all of your nodes' IPs.

In this example our ZPOOL_IP is 10.201.18.33, our ZPOOL_NAME is zpool-127659, and our PARTITION_NAME is kubernetes. Please adjust these values accordingly in the later steps.
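
Optionally, you can check from one of your nodes that the partition is exported and reachable. A minimal sketch, assuming you can SSH into a node and that the showmount tool (provided by the nfs-common package on Debian/Ubuntu) is available:

# List the NFS exports offered by the NAS-HA
# (10.201.18.33 is the example ZPOOL_IP above; use your own)
showmount -e 10.201.18.33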

Step 2 - Configuring Kubernetes to use our newly created NFS partition

Your Kubernetes cluster needs an additional piece of software to make use of the NFS partition. We will install it and then create a first volume, shared across multiple pods.

First, let's create a values.yaml configuration file for the NFS subdir external provisioner Helm installation:

nfs:
  server: '[ZPOOL_IP]'
  path: '/[ZPOOL_NAME]/[PARTITION_NAME]'
  mountOptions:
    - tcp
    - nfsvers=3
storageClass:
  name: nfs
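
For instance, with the example values from Step 1, the file would look like this (substitute your own values):

nfs:
  server: '10.201.18.33'            # ZPOOL_IP
  path: '/zpool-127659/kubernetes'  # /ZPOOL_NAME/PARTITION_NAME
  mountOptions:
    - tcp
    - nfsvers=3
storageClass:
  name: nfs                         # StorageClass name used by the PVCs below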

And then install the nfs-subdir-external-provisioner:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner -n kube-system nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f values.yaml
$ helm install nfs-subdir-external-provisioner -n kube-system nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f values.yaml
NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Jun  8 14:39:57 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let's verify our installation:

kubectl get deploy nfs-subdir-external-provisioner -n kube-system
$ kubectl get deploy nfs-subdir-external-provisioner -n kube-system
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
nfs-subdir-external-provisioner   1/1     1            1           36s
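
The chart also creates the StorageClass declared in our values.yaml; you can confirm that a StorageClass named nfs now exists:

kubectl get storageclass nfs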

Step 3 - Creating and using an NFS persistent volume

Let’s create an nfs-persistent-volume-claim.yaml file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
  namespace: default
spec:
  accessModes:
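  # ReadWriteMany (RWX) allows pods on several nodes to mount the volume simultaneously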
  - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi

And apply this to create the persistent volume claim:

kubectl apply -f nfs-persistent-volume-claim.yaml
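
The provisioner should dynamically create a matching PersistentVolume and bind it to the claim; the STATUS column should read Bound:

kubectl get pvc nfs-pvc -n default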

Let’s now create two Nginx pods using the persistent volume claim as their webroot folder, on two different Kubernetes nodes. In this example the nodes are called kubernetes-node-1 and kubernetes-node-2; please modify this accordingly. Let’s create a nfs-nginx-pods.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-nginx-1
  namespace: default
spec:
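  # nodeName pins the pod to a specific node, so we can demonstrate sharing
  # the same volume across two different nodes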
  nodeName: kubernetes-node-1
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-volume

---

apiVersion: v1
kind: Pod
metadata:
  name: nfs-nginx-2
  namespace: default
spec:
  nodeName: kubernetes-node-2
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-volume

And apply this to create the Nginx pods:

kubectl apply -f nfs-nginx-pods.yaml
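
You can check that both pods are running, each on its own node:

kubectl get pods -n default -o wide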

Let’s open a shell inside the first Nginx container to create a file on the NFS persistent volume:

kubectl exec -it nfs-nginx-1 -n default -- bash

Create a new index.html file:

echo "NFS volume!" > /usr/share/nginx/html/index.html

And exit the Nginx container:

exit

Let’s try to access our new web page:

kubectl proxy

And open the URL http://localhost:8001/api/v1/namespaces/default/pods/http:nfs-nginx-1:/proxy/
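
Alternatively, while kubectl proxy is running, you can fetch the page from a second terminal; it should return the NFS volume! message:

curl http://localhost:8001/api/v1/namespaces/default/pods/http:nfs-nginx-1:/proxy/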

Now let’s try to see if the data is shared with the second pod. Open the URL http://localhost:8001/api/v1/namespaces/default/pods/http:nfs-nginx-2:/proxy/
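
You can also verify it from inside the second container, which should print the content written from the first pod:

kubectl exec nfs-nginx-2 -n default -- cat /usr/share/nginx/html/index.html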

As you can see, the data is correctly shared between the two Nginx pods running on two different Kubernetes nodes. Congratulations, you have successfully set up a multi-attach persistent volume with OVHcloud NAS-HA!
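
When you are done experimenting, you can remove the resources created in this tutorial:

kubectl delete -f nfs-nginx-pods.yaml
kubectl delete -f nfs-persistent-volume-claim.yaml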

Go further

To learn more about using your Kubernetes cluster in a practical way, we invite you to look at our OVHcloud Managed Kubernetes doc site.

Join our community of users.

