Last updated July 28th, 2021.
OVHcloud Managed Kubernetes service provides you with Kubernetes clusters without the hassle of installing or operating them.
During the day-to-day life of your cluster, you may want to dynamically adjust its size to accommodate your workloads. The cluster autoscaler simplifies this task by scaling your OVHcloud Managed Kubernetes cluster up or down to meet the demand of your workloads.
This guide follows on from the Using the cluster autoscaler guide, and describes the cluster autoscaler configuration.
Requirements
- An OVHcloud Managed Kubernetes cluster
- A basic understanding of how the cluster autoscaler works and how to enable it
Cluster autoscaler configuration
When the autoscaler is enabled on a node pool, it uses a default configuration.
Below is a description of the parameters used in the autoscaler configuration, together with their default values:
| Parameter | Description | Default value |
|---|---|---|
| `scan-interval` | How often the cluster is reevaluated for scale up or down | 10 seconds |
| `scale-down-delay-after-add` | How long after scale up that scale down evaluation resumes | 10 minutes |
| `scale-down-delay-after-delete` | How long after node deletion that scale down evaluation resumes | `scan-interval` |
| `scale-down-delay-after-failure` | How long after scale down failure that scale down evaluation resumes | 3 minutes |
| `scale-down-unneeded-time` | How long a node should be unneeded before it is eligible for scale down | 10 minutes |
| `scale-down-unready-time` | How long an unready node should be unneeded before it is eligible for scale down | 20 minutes |
| `scale-down-utilization-threshold` | Node utilization level, defined as the sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
| `max-graceful-termination-sec` | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| `balance-similar-node-groups` | Detects node pools with the same instance type and the same set of labels, and balances the number of nodes between them | false |
| `expander` | Type of node pool expander to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | `random` |
| `skip-nodes-with-local-storage` | If true, the cluster autoscaler will never delete nodes whose pods use local storage, for example `EmptyDir` or `HostPath` | false |
| `skip-nodes-with-system-pods` | If true, the cluster autoscaler will never delete nodes with pods from `kube-system` (except for DaemonSet or mirror pods) | false |
| `max-empty-bulk-delete` | Maximum number of empty nodes that can be deleted at the same time | 10 nodes |
| `new-pod-scale-up-delay` | For scenarios like burst/batch scale, where you don't want the cluster autoscaler to act before the Kubernetes scheduler has had a chance to schedule all the pods, you can tell the CA to ignore unscheduled pods until they reach a certain age | 0 seconds |
| `max-total-unready-percentage` | Maximum percentage of unready nodes in the cluster; once this percentage is exceeded, the CA halts operations | 45% |
| `max-node-provision-time` | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
| `ok-total-unready-count` | Number of allowed unready nodes, irrespective of `max-total-unready-percentage` | 3 nodes |
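To make the scale-down and cluster-health thresholds above concrete, here is a minimal Python sketch of the two checks. The helper functions are hypothetical, written only to illustrate the definitions in the table; the real cluster autoscaler performs equivalent logic internally:

```python
# Illustrative sketch of two checks described in the table above.
# These helpers are hypothetical, not part of any real autoscaler API.

def is_scale_down_candidate(requested, capacity, threshold=0.5):
    """A node can be considered for scale down when its utilization
    (sum of requested resources divided by capacity) is below
    scale-down-utilization-threshold (default 0.5)."""
    return requested / capacity < threshold

def autoscaler_halts(unready_nodes, total_nodes,
                     max_unready_pct=45, ok_unready_count=3):
    """The autoscaler halts operations when more than
    ok-total-unready-count nodes (default 3) are unready AND the
    unready share exceeds max-total-unready-percentage (default 45%)."""
    pct = 100 * unready_nodes / total_nodes
    return unready_nodes > ok_unready_count and pct > max_unready_pct

# A node with 1.2 vCPU requested out of 4 vCPU capacity (0.3 < 0.5)
print(is_scale_down_candidate(1.2, 4))   # prints True
# 2 unready nodes out of 10 (20%, within both limits)
print(autoscaler_halts(2, 10))           # prints False
```

For example, with the defaults, 5 unready nodes out of 10 (50%) would exceed both limits and halt the autoscaler, while 2 unready nodes out of 10 would not.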
You can get more information on those parameters in the Kubernetes autoscaler documentation. If you think we should reevaluate a default value and/or prioritize the customization of one of these parameters, we welcome your feedback on this beta feature in the Gitter community channel for the OVHcloud Managed Kubernetes service.
To have an overview of OVHcloud Managed Kubernetes service, you can go to the OVHcloud Managed Kubernetes page.
Otherwise, to skip it and learn more about using your Kubernetes cluster the practical way, we invite you to look at our tutorials.
Join our community of users.