Increasing maximum pod count on k8s node
Categories: tech
Tags: kubernetes kubelet
In my homelab one node is 6-15x bigger than the others, depending on the dimension, before even talking about storage. CPU and memory are detected by Kubernetes, but the maximum pod count is not; kubelet defaults it to a static 110 pods per node. I have pushed past that threshold with my mostly idle pods and would like to increase the limit. In theory this will reduce reliability in a failure case, or more likely during maintenance operations such as restarts or quarterly dust removal. Currently my cluster only hosts 167 pods, meaning the cluster will be able to fail over as long as I do not pass the 220 threshold…and can run within the memory and CPU architecture constraints :-) .
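As a quick sanity check, the effective limit is visible on each node object (assuming kubectl access to the cluster):

```shell
# Pod capacity per node as reported by kubelet; the default is 110
kubectl get nodes -o custom-columns='NAME:.metadata.name,MAXPODS:.status.capacity.pods'
```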
Anyway, an article approaching this from the systemd unit side seems to be the first hit. Ideally this would be handled via `KubeletConfiguration` instead.
`kubelet.config.k8s.io/v1beta1` is the configuration API group and version. With `CredentialProviderConfig` I might be able to stop injecting pull secrets into every namespace; I will have to consider the ramifications of this later.
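As a note to future me, a minimal sketch of what that config looks like, using the AWS ECR provider purely as an illustration (the provider name and image patterns are placeholders, not my registry):

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # Illustrative provider; any kubelet credential provider plugin binary works
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```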
Despite not being able to directly link to the docs, `maxPods` does exist as a field on `KubeletConfiguration`.
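The change itself is tiny; in `KubeletConfiguration` form it would look something like this (250 is a hypothetical value for the oversized node, not a recommendation):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hypothetical new limit; kubelet defaults to 110
maxPods: 250
```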
Thumbing through the reference reminds me there are a number of other fields I would like to set at another time, such as verifying Topology Manager is set up correctly, making k8s NUMA aware, and learning about CPU management.
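For future reference, those knobs live on `KubeletConfiguration` as well; a sketch of the fields I expect to evaluate (the policy values shown are the documented options, not settings I have tested):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Align CPU and device allocations to a single NUMA node
# (options: none, best-effort, restricted, single-numa-node)
topologyManagerPolicy: single-numa-node
# Pin exclusive CPUs for Guaranteed pods with integer CPU requests
cpuManagerPolicy: static
# The static CPU manager policy requires reserving CPUs for system daemons
reservedSystemCPUs: "0,1"
```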
Accessing
Sadly `kubectl get KubeletConfiguration --all-namespaces` fails with `error: the server doesn't have a resource type "KubeletConfiguration"`. Nor does it show up in something like OpenLens. After some research it looks like this is just housed in files on each node, which is a bit unexpected but I guess makes some sense. It definitely prevents a malicious actor from running a pod which then reconfigures nodes. The process documented by k8s themselves is to:

1.) Update the configuration: `kubectl edit cm -n kube-system kubelet-config`
2.) Run `kubeadm upgrade node phase kubelet-config`
3.) `systemctl restart kubelet`
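Putting it together, one pass per node would look roughly like this; `bigbox` is a placeholder node name, and the drain/uncordon and verification steps are my additions around the documented process, not part of it:

```shell
# Move workloads off the node before touching kubelet
kubectl drain bigbox --ignore-daemonsets --delete-emptydir-data

# 1.) Update the cluster-wide kubelet ConfigMap (raise maxPods)
kubectl edit cm -n kube-system kubelet-config

# 2.) On the node itself: regenerate /var/lib/kubelet/config.yaml
sudo kubeadm upgrade node phase kubelet-config

# 3.) Restart kubelet so it picks up the new file
sudo systemctl restart kubelet

# Confirm the running kubelet sees the new limit
kubectl get --raw "/api/v1/nodes/bigbox/proxy/configz" | jq '.kubeletconfig.maxPods'

# Let workloads schedule back
kubectl uncordon bigbox
```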
These steps require restarts of kubelet, including moving workloads. Since my family is home on Spring Break this week, I will need to defer this update until after the break.