Last updated July 25th, 2022.
Objective
The OVHcloud Managed Kubernetes service provides you with Kubernetes clusters without the hassle of installing or operating them.
By default, your Kubernetes clusters will have public IPs. For some use cases, or for security reasons, you might prefer having your Kubernetes cluster inside a private network.
OVHcloud vRack is a private networking solution that enables our customers to route traffic between OVHcloud dedicated servers and other OVHcloud services.
When your Managed Kubernetes and your other services are both in the vRack, but in different private networks, some extra configuration is needed. In this document, you will find an explanation of why this extra configuration is needed and how to do it.
You can now create and use a custom gateway on an OVHcloud Managed Kubernetes cluster, but if you don't want to, the content of this guide is still relevant.
In this document, we assume that you have some understanding of how to use OVHcloud Managed Kubernetes in the vRack. To get more information on this topic, you can look at the Using vRack Private Network guide and the Working with vRack example - Managed Kubernetes and Public Cloud instances tutorial.
Networking in Managed Kubernetes inside the vRack
To better understand why some extra configuration is needed, let's begin by explaining how the OVHcloud Managed Kubernetes integration with vRack private networks works.
OVHcloud Managed Kubernetes without vRack
Let's have a look at our Managed Kubernetes without vRack. Both the master and the nodes have IP addresses in a network exposed on the internet:
All the traffic between the master and the nodes goes through these IP addresses, as does the administrative traffic and the traffic to/from resources outside the cluster.
OVHcloud Managed Kubernetes inside vRack
When you put an OVHcloud Managed Kubernetes cluster in a private network in the vRack, a new network interface connected to this private network is added to every node.
Using the addresses and names in the schema, each node has an eth0 network interface connected to the external network, and an eth1 interface connected to the private network. eth0 is dedicated to the communication between nodes and master, to your managed service administration traffic, and to communication with external services. Pod-to-pod traffic and traffic to the private network are routed through eth1.
In order to allow this routing, the default gateway for each node is in the external network, via its eth0 interface, and only the traffic towards the private network is routed through eth1.
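If you want to see this routing setup for yourself, one possible way (a sketch, not an official procedure; the node name and image below are only examples) is to open a shell sharing a node's host network namespace with kubectl debug and look at its routing table: the default route should point to the eth0 side, and the route to your private network to eth1.

```bash
# List the nodes of your cluster to pick one
kubectl get nodes

# Start an ephemeral debugging pod that shares the node's host network namespace
# (replace my-node-1 with one of your node names)
kubectl debug node/my-node-1 -it --image=busybox

# Inside the debugging pod, display the node's routing table:
# the default route goes through eth0, the private network route through eth1
ip route
```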
For this use case, no additional configuration is needed: you simply have to choose the private network when creating your Managed Kubernetes cluster, as explained in the Using vRack Private Network guide and the Working with vRack example - Managed Kubernetes and Public Cloud instances tutorial.
Communication between different private networks
In some use cases, you don't want a single private network but several, while keeping the ability to communicate between them (one of the strong points of the vRack is to allow transparent communication between your private networks).
This use case currently requires additional configuration on the OVHcloud Managed Kubernetes cluster side.
The need for extra manual configuration described in this guide is temporary. Our Managed Kubernetes team is working on a more streamlined solution, as explained in this issue in our Public Cloud roadmap.
The problem
The reason is the networking model detailed in the previous section. Let's adapt the previous schema so that the PCI virtual machine is in a different private network than the Managed Kubernetes cluster:
As explained before, in order to allow pod to master communication, the default gateway for Managed Kubernetes nodes is in the external network, via eth0, and only the traffic to the private network to which the cluster is attached is routed through eth1.
That means that if, in our schema, Pod 3 wants to communicate with the PCI vm1, which is in a different private network, the traffic won't be routed through eth1 but through eth0, towards the default gateway, which has no access to vm1, so the connection fails.
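For illustration, you could observe this failure from inside the cluster by trying to reach vm1 on its private address from a temporary pod (the IP 192.168.1.100 below is purely an example for vm1's address in the other private network):

```bash
# Run a temporary pod and try to ping vm1 on its private IP (example address)
kubectl run test-vm1 -it --rm --restart=Never --image=busybox -- ping -c 3 192.168.1.100
# Without the extra routes described below, the traffic leaves through eth0
# and the ping gets no reply
```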
The solution
The solution to this problem is to push the routes to the additional private networks via the DHCP of the private networks. This informs the nodes that traffic to the private networks is to be sent via eth1 instead of eth0:
With this setup, if in our schema Pod 3 wants to communicate with the PCI vm1, which is in a different private network, the traffic is routed through eth1, and thus reaches vm1:
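As an illustration, here is a minimal sketch of how such a route could be pushed, assuming the private networks are Public Cloud (OpenStack) networks managed with the openstack CLI, and using example values: 192.168.1.0/24 for the other private network hosting vm1, 192.168.0.1 for a gateway in the cluster's private network that can reach it through the vRack, and <cluster_subnet_id> as a placeholder for the subnet attached to your nodes.

```bash
# Find the subnet attached to your Managed Kubernetes nodes
openstack subnet list

# Push an additional route via the DHCP of the cluster's subnet:
# traffic to 192.168.1.0/24 (the other private network) will be sent to
# 192.168.0.1 (reachable on eth1) instead of the default gateway on eth0
openstack subnet set \
  --host-route destination=192.168.1.0/24,gateway=192.168.0.1 \
  <cluster_subnet_id>

# Verify that the host route has been added to the subnet
openstack subnet show <cluster_subnet_id> -c host_routes
```

The nodes apply the new routes when they obtain or renew their DHCP lease, so depending on the DHCP client on your nodes you may need to wait for a renewal or to reboot/redeploy the nodes for the change to take effect.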
Go further
Join our community of users.