# KubeWeek Challenge Day-6: Kubernetes Cluster Maintenance
## Table of contents
- Upgrading the Cluster
- Upgrading Kubernetes: A Step-by-Step Guide
  - Step 1: Log in to the first master node and upgrade the kubeadm tool only
  - Step 2: Verify the upgrade plan
  - Step 3: Apply the upgrade plan
  - Step 4: Upgrade kubelet and restart the service
  - Step 5: Apply the upgrade plan to the other master nodes
  - Step 6: Upgrade kubectl on all master nodes
  - Step 7: Upgrade kubeadm on the first worker node
  - Step 8: Log in to a master node and drain the first worker node
  - Step 9: Upgrade the kubelet config on the worker node
  - Step 10: Upgrade kubelet on the worker node and restart the service
  - Step 11: Restore the worker node
- Backing Up and Restoring Data
- Kubernetes backup solution market
- Scaling the Cluster
## Upgrading the Cluster
Regular upgrades are required to ensure that the cluster stays updated with the latest security features, bug fixes, and newly introduced features. A full upgrade involves four phases:
- Upgrade the control plane
- Upgrade the nodes in your cluster
- Upgrade clients such as kubectl
- Adjust manifests and other resources based on the API changes that accompany the new Kubernetes version
## Upgrading Kubernetes: A Step-by-Step Guide
Let's follow the upgrade steps now:
### Step 1: Log in to the first master node and upgrade the kubeadm tool only
$ ssh admin@10.0.11.1
$ apt-mark unhold kubeadm && \
  apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm
We run apt-mark unhold and apt-mark hold because installing kubeadm on its own would otherwise automatically upgrade the other components, such as kubelet, to the latest available version (v1.15 at the time of writing) by default, which would cause problems. Marking a package as held back prevents it from being automatically installed, upgraded, or removed.
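Since the unhold/install/hold pattern repeats for every component, it can be wrapped in a small helper. This is a hedged sketch: the `pinned_install` function name is my own, not from the article, and the function only prints the composed command (dry run) rather than executing it.

```shell
#!/bin/sh
# Compose the pinned-upgrade command for one component (dry run).
# pkg: kubeadm | kubelet | kubectl; ver: e.g. 1.14.0-00
pinned_install() {
  pkg=$1; ver=$2
  printf 'apt-mark unhold %s && apt-get update && apt-get install -y %s=%s && apt-mark hold %s\n' \
    "$pkg" "$pkg" "$ver" "$pkg"
}

pinned_install kubeadm 1.14.0-00
pinned_install kubelet 1.14.0-00
```

Piping the output to `sh` (or removing the `printf` indirection) would execute the upgrade for real.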
### Step 2: Verify the upgrade plan
$ kubeadm upgrade plan
...
COMPONENT CURRENT AVAILABLE
API Server v1.13.0 v1.14.0
Controller Manager v1.13.0 v1.14.0
Scheduler v1.13.0 v1.14.0
Kube Proxy v1.13.0 v1.14.0
...
### Step 3: Apply the upgrade plan
$ kubeadm upgrade apply v1.14.0
### Step 4: Upgrade kubelet and restart the service
$ apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet
### Step 5: Apply the upgrade plan to the other master nodes
$ ssh admin@10.0.11.2
$ kubeadm upgrade node experimental-control-plane
$ ssh admin@10.0.11.3
$ kubeadm upgrade node experimental-control-plane
### Step 6: Upgrade kubectl on all master nodes
$ apt-mark unhold kubectl && apt-get update && apt-get install -y kubectl=1.14.0-00 && apt-mark hold kubectl
### Step 7: Upgrade kubeadm on the first worker node
$ ssh worker@10.0.12.1
$ apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm
### Step 8: Log in to a master node and drain the first worker node
$ ssh admin@10.0.11.1
$ kubectl drain worker --ignore-daemonsets
### Step 9: Upgrade the kubelet config on the worker node
$ ssh worker@10.0.12.1
$ kubeadm upgrade node config --kubelet-version v1.14.0
### Step 10: Upgrade kubelet on the worker node and restart the service
$ apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet
$ systemctl restart kubelet
### Step 11: Restore the worker node
$ ssh admin@10.0.11.1
$ kubectl uncordon worker
### Step 12: Repeat Steps 7-11 for the rest of the worker nodes
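The per-worker sequence can be captured in a loop. This is a hedged dry-run sketch: the `worker_upgrade_cmds` function is my own naming, the 10.0.12.x IPs are illustrative (following the addressing scheme above), and the script only prints each command instead of running it, since the real commands need SSH access and a live cluster.

```shell
#!/bin/sh
# Print the Step 7-11 upgrade sequence for one worker node (dry run).
worker_upgrade_cmds() {
  ip=$1
  echo "ssh worker@$ip"
  echo "apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.14.0-00 && apt-mark hold kubeadm"
  echo "kubectl drain <node-name> --ignore-daemonsets   # Step 8: run from a master node"
  echo "kubeadm upgrade node config --kubelet-version v1.14.0"
  echo "apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.14.0-00 && apt-mark hold kubelet"
  echo "systemctl restart kubelet"
  echo "kubectl uncordon <node-name>                    # Step 11: run from a master node"
}

# Remaining workers (illustrative IPs)
for ip in 10.0.12.2 10.0.12.3; do
  echo "# --- worker $ip ---"
  worker_upgrade_cmds "$ip"
done
```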
### Step 13: Verify the health of the cluster
$ kubectl get nodes
## Backing Up and Restoring Data
Kubernetes cluster maintenance involves backing up and restoring data, which is crucial for disaster recovery plans. The etcd database stores all API objects and settings, so backing it up is enough to restore the cluster's desired state; data stored in persistent volumes must be backed up separately, and Kubernetes ecosystems offer various backup methods for it.
It's possible to split all of the data and config file types into two categories: configuration and persistent data.
Configuration (and desired-state information) includes:
- Kubernetes etcd database
- Docker files
- Images from Docker files

Persistent data (changed or created by the containers themselves) includes:
- Databases
- Persistent volumes
## Kubernetes backup solution market
- Kasten K10
- Portworx
- Cohesity
- OpenEBS
- Rancher Longhorn
- Rubrik
- Druva
- Zerto
To ensure data resiliency, it is crucial to regularly back up and be prepared to restore your Kubernetes cluster data. Consider the following steps for data backup and restoration:
### Step 1: Identify critical data to be backed up
Determine which Kubernetes resources and configurations need to be included in the backup.
### Step 2: Back up the cluster state
The etcd database is the authoritative record of the cluster's state, so the core of a backup is an etcd snapshot. On a node with access to etcd, take a snapshot with etcdctl, replacing <backup-directory> with the desired backup location:
$ ETCDCTL_API=3 etcdctl snapshot save <backup-directory>/etcd-snapshot.db
You can also dump the cluster's API objects with kubectl for reference, although this dump is intended for inspection and debugging rather than restoration:
$ kubectl cluster-info dump --output-directory=<backup-directory>
### Step 3: Restore the cluster data
kubectl has no restore subcommand; to restore the cluster state from a backup, restore the etcd snapshot with etcdctl, replacing <backup-directory> with the backup location:
$ ETCDCTL_API=3 etcdctl snapshot restore <backup-directory>/etcd-snapshot.db
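In practice, on a kubeadm cluster, etcdctl needs the etcd client certificates to take a snapshot. The sketch below composes the save and restore commands as strings (dry run), since actually running them requires a live etcd; the certificate paths are the kubeadm defaults, and the endpoint, file paths, and function names are illustrative assumptions, not from the original article.

```shell
#!/bin/sh
# Compose etcd snapshot save/restore commands (dry run; kubeadm default cert paths).
CERT_DIR="/etc/kubernetes/pki/etcd"

etcd_backup_cmd() {
  # $1: snapshot file path to write
  printf 'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=%s/ca.crt --cert=%s/server.crt --key=%s/server.key snapshot save %s\n' \
    "$CERT_DIR" "$CERT_DIR" "$CERT_DIR" "$1"
}

etcd_restore_cmd() {
  # $1: snapshot file path, $2: fresh data directory for the restored member
  printf 'ETCDCTL_API=3 etcdctl snapshot restore %s --data-dir=%s\n' "$1" "$2"
}

etcd_backup_cmd "/var/backups/etcd-snapshot-$(date +%Y-%m-%d).db"
etcd_restore_cmd "/var/backups/etcd-snapshot-2024-01-01.db" "/var/lib/etcd-restored"
```

Restoring into a fresh `--data-dir` (rather than overwriting `/var/lib/etcd` in place) lets you point the etcd static pod at the restored directory once you have verified the snapshot.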
## Scaling the Cluster
Scaling a Kubernetes cluster means updating the cluster by adding nodes to it or removing nodes from it. Adding nodes scales the cluster up; removing nodes scales it down.
### Types of Auto Scaling in Kubernetes
By default, Kubernetes supports three types of autoscaling:
- Horizontal Scaling (Scaling Out): Horizontal scaling involves altering the number of pods available to the cluster to suit sudden changes in workload demands. Because this technique scales pods rather than resources, it is commonly the preferred approach to avoid resource deficits.
- Vertical Scaling (Scaling Up): In contrast to horizontal scaling, vertical scaling involves dynamically provisioning attributed resources, such as the RAM or CPU of cluster nodes, to match application requirements. This is essentially achieved by tweaking the pod resource request parameters based on workload consumption metrics. The technique automatically adjusts pod resources based on usage over time, thereby minimizing resource wastage and facilitating optimum cluster resource utilization. This can be considered an advantage when comparing Kubernetes horizontal vs. vertical scaling.
- Cluster/Multidimensional Scaling: Cluster scaling involves increasing or reducing the number of nodes in the cluster based on node utilization metrics and the existence of pending pods. The cluster autoscaling object typically interfaces with the chosen cloud provider so that it can request and deallocate nodes seamlessly as needed.
Multidimensional scaling also allows a combination of both horizontal and vertical scaling for different resources at any given time. While doing so, the multidimensional autoscaler ensures there are no idle nodes for an extended duration and each pod in the cluster is precisely scheduled.
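Pod-level horizontal scaling described above is typically driven by a HorizontalPodAutoscaler, which can be created imperatively with `kubectl autoscale`. This is a hedged sketch: the deployment name `web` and the thresholds are illustrative, and the script only prints the command (dry run), since creating an HPA requires a live cluster with a metrics source.

```shell
#!/bin/sh
# Compose a kubectl autoscale command for a deployment (dry run).
hpa_cmd() {
  # $1: deployment name, $2: min replicas, $3: max replicas, $4: target CPU %
  printf 'kubectl autoscale deployment %s --min=%s --max=%s --cpu-percent=%s\n' \
    "$1" "$2" "$3" "$4"
}

hpa_cmd web 2 10 80
```

Once created, `kubectl get hpa` shows the current and target utilization alongside the replica count.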
Thank you for reading! Happy Learning!!
Santosh Chauhan
Hello, I'm Santosh Chauhan, a DevOps enthusiast who enjoys automation, continuous integration, and deployment. With extensive hands-on experience in DevOps and cloud computing, I am proficient in various tools and technologies related to infrastructure automation, containerization, cloud platforms, monitoring and logging, and CI/CD. My ultimate objective is to assist organizations in achieving quicker, more effective software delivery while maintaining high levels of quality and dependability.