Tanzu Kubernetes Grid Management Cluster Administration (EN)
TKG administration to create a Workload cluster and add applications to this cluster
Last updated 10th November 2022
This guide documents how to create a Workload cluster and add applications to it.
OVHcloud provides services for which you are responsible, with regard to their configuration and management. It is therefore your responsibility to ensure that they work properly.
This guide is designed to assist you as much as possible with common tasks. However, we recommend contacting a specialist provider if you experience any difficulties or doubts when it comes to managing, using or setting up a service on a server.
We will deploy a Workload cluster on a Tanzu Kubernetes Grid management cluster and add an application.
At the end of the installation, you will have six new virtual machines in addition to the seven that are required for the management cluster to work.
The Workload cluster can be deployed on the same network as the management cluster or on a separate one. In this guide, we deploy it on the same network for demonstration purposes. To use a different network, set the corresponding portgroup in the VSPHERE_NETWORK field of the configuration YAML of your workload cluster, as explained below. Ensure that the management cluster can communicate with the workload cluster.
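For reference, selecting a different portgroup is a single line in the workload cluster configuration file; the path below is a placeholder to replace with your own vSphere portgroup:

```yaml
# Hypothetical portgroup path; replace with the portgroup of your target network
VSPHERE_NETWORK: /Datacenter/network/vlan10
```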
Once you have added this cluster, you can then install applications.
Copy the file that was used to create the management cluster into a file named tkg-workload-cluster.yaml.
cp ~/.config/tanzu/tkg/clusterconfigs/tkgmfile.yaml ~/tkg-workload-cluster.yaml
Edit the contents of the ~/tkg-workload-cluster.yaml file by changing these values:
CLUSTER_NAME: tkg-workload-cluster
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.0.11
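If you prefer to make these two edits non-interactively, a minimal sed sketch follows. It is shown against a throwaway stand-in file and assumes both keys already exist on their own lines in the copied configuration:

```shell
# Stand-in for the copied management cluster configuration (demo only)
cat > /tmp/tkg-workload-cluster.yaml <<'EOF'
CLUSTER_NAME: tkgm-cluster
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.0.10
EOF

# Overwrite the two values that must change for the workload cluster
sed -i 's/^CLUSTER_NAME:.*/CLUSTER_NAME: tkg-workload-cluster/' /tmp/tkg-workload-cluster.yaml
sed -i 's/^VSPHERE_CONTROL_PLANE_ENDPOINT:.*/VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.0.11/' /tmp/tkg-workload-cluster.yaml
```

When applying this against the real copy, point the sed commands at ~/tkg-workload-cluster.yaml instead of the demo file.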
Run this command to create the cluster:
tanzu cluster create --file tkg-workload-cluster.yaml
Connect to the cluster with these commands:
# Retrieve admin credentials for the cluster
tanzu cluster kubeconfig get tkg-workload-cluster --admin
# Switch to the tkg-workload-cluster context
# Administrative contexts always have the form clustername-admin@clustername
kubectl config use-context tkg-workload-cluster-admin@tkg-workload-cluster
Log in to your vSphere interface to see the six virtual machines in the Workload cluster.
The Load Balancer links the Workload cluster network to the external network on the VMware cluster. To do this, we will use the kube-vip package, which acts as a load balancer between the internal network of the cluster and the VLAN10 network. You can find more information in the kube-vip documentation.
Run these commands from the Bootstrap virtual machine:
# Create a folder to host the kube-vip package files
mkdir ~/kube-vip
# Move into this folder
cd ~/kube-vip
# Clone the customizations repository from GitHub
git clone https://github.com/vrabbi/tkgm-customizations.git
# Move to the package subfolder
cd tkgm-customizations/carvel-packages/kube-vip-package/
# Apply the package pre-configuration
kubectl apply -n tanzu-package-repo-global -f metadata.yml
kubectl apply -n tanzu-package-repo-global -f package.yaml
Create the file ~/kube-vip/tkgm-customizations/carvel-packages/kube-vip-package/values.yaml with the following content, which defines the range of IP addresses that kube-vip may assign on VLAN10 when exposing an application.
vip_range: 192.168.0.210-192.168.0.250
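The file can be written in one step from the Bootstrap VM with a heredoc (a sketch: the mkdir -p is harmless if the clone above already created the path, and the range is an example to adapt to the free addresses on your VLAN10):

```shell
# Create the package folder if it does not exist yet, then write values.yaml
mkdir -p ~/kube-vip/tkgm-customizations/carvel-packages/kube-vip-package
cat > ~/kube-vip/tkgm-customizations/carvel-packages/kube-vip-package/values.yaml <<'EOF'
vip_range: 192.168.0.210-192.168.0.250
EOF
```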
Install the package using these commands:
# Installation
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
# Checking for the kubevip package
kubectl get packages -A
For demonstration purposes, we will deploy an application named yelb that uses four pods, one of which is exposed via the kube-vip load balancer. You can find more information in this example: KUBERNETES YELB.
Run these commands from the Bootstrap virtual machine to install a new application in the Workload cluster:
# Creating a namespace for this application
kubectl create ns yelb
# Deploying the application from a source on the Internet
kubectl -n yelb apply -f \
https://raw.githubusercontent.com/lamw/yelb/master/deployments/platformdeployment/Kubernetes/yaml/yelb-k8s-loadbalancer.yaml
# Checking that the application is properly installed
kubectl get all -n yelb
The CLUSTER-IP column shows the IP addresses internal to the Kubernetes cluster; applications that are reachable from outside the cluster have an address in the EXTERNAL-IP column.
In this example, the website is accessible via the address 192.168.0.223 on port 80.
In the Bootstrap console, use the web browser to connect to the URL http://192.168.0.223.
An application can consist of multiple pods that communicate with each other through the internal network of the workload cluster. Some ports are opened on the VMware cluster network using the kube-vip module.
Installing Tanzu Kubernetes Grid
Tanzu Kubernetes Grid VMware Overview
VMware Tanzu Kubernetes Grid documentation
Manual installation of the CLI tool for the deployment of Tanzu Kubernetes Grid
Join our community of users on https://community.ovh.com/en/.