Kubernetes: The End!

I completely understand if you're eager to learn Kubernetes, or if you found this blog because you're feeling overwhelmed by it. I know how challenging learning Kubernetes can be. I've faced plenty of problems and confusion myself, and at times I even thought about giving up. But as DevOps Engineers, giving up is not in our nature, or even in our DevOps Lifecycle ♾️.
That's why I'm sharing one of my best blogs to clear up any confusion and give you a clear, achievable path to learning Kubernetes. What does the title Kubernetes: The End! mean? It's simple: it marks the end of the confusion and of the idea that Kubernetes is unachievable. So don't worry and follow along. All you need before we begin is some knowledge of Docker and YAML. Let's get started!
Problem Statement:
Suppose you are running hundreds of containers to host your applications. Without Kubernetes, managing them becomes extremely challenging because:
- Manual Monitoring: You would need to constantly check if all 100 containers are running properly. If one container crashes, you might not even know unless you check manually.
  - Example: A container running a critical service crashes at midnight. Without monitoring, you only find out when users complain the next morning.
- Manual Recovery: If a container goes down, you would have to manually restart it. If multiple containers crash, restarting them one by one takes a lot of time and effort.
  - Example: If 10 containers fail at the same time, you’d have to log into multiple systems and restart each one manually, wasting hours.
- Scaling Issues: When user traffic increases, you need more containers to handle the load. Without Kubernetes, scaling up (or down) requires manual intervention, which is slow and error-prone.
  - Example: If 10,000 users suddenly visit your app, you’d need to manually start extra containers, which might be too late by the time you react.
- No Automation: Tasks like updating applications in containers, distributing traffic, or restarting failed containers need to be done manually, making the system prone to human error.
- Monitoring Health: Monitoring the health and performance of 100 containers across multiple machines is overwhelming without a centralized system.
How Kubernetes Solves this Problem:
- Kubernetes automatically monitors all containers and ensures they are running properly.
- If a container crashes, Kubernetes restarts it automatically without any manual intervention.
- It can scale containers up or down based on traffic, ensuring resources are used efficiently.
- Kubernetes provides a centralized dashboard and command-line tools to monitor all containers in one place.
- It ensures zero-downtime updates, so you can roll out new versions without disrupting users.
Example with Kubernetes:
Let’s say 5 out of your 100 containers go down at 2 AM. Kubernetes immediately detects this, restarts those containers, and ensures your app is back to normal without you even knowing there was an issue. At the same time, it monitors traffic, and if user demand spikes, Kubernetes automatically adds more containers to handle the load.
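To make this concrete, here is a minimal sketch of a Deployment manifest (the name my-app and the image are placeholders, not from any real project). Declaring replicas: 5 tells Kubernetes the desired state; if any of those pods crash at 2 AM, Kubernetes recreates them on its own:

```yaml
# Minimal illustrative Deployment; "my-app" and the image are placeholder values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                 # desired state: keep 5 copies running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # any container image works here
          ports:
            - containerPort: 80
```

If demand spikes, you can scale with a single command, kubectl scale deployment my-app --replicas=10, or attach a HorizontalPodAutoscaler to let Kubernetes adjust the count automatically.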
In summary, Kubernetes solves the problem of managing, scaling, and monitoring large numbers of containers, making your system more reliable, automated, and easier to manage.
What is Kubernetes?
Kubernetes is an open-source platform that helps manage containerized applications. Containers are like isolated environments that hold your app and everything it needs to run. Kubernetes makes it easier to deploy, scale, and manage these containers across a group of computers (called a cluster).
History of Kubernetes:
Google’s Early Challenges:
In the early 2000s, Google was running thousands of applications on its massive infrastructure. Managing these applications efficiently was a challenge, especially as they relied heavily on containers (even before Docker became popular).
Borg:
To solve this, Google developed Borg, an internal cluster management system. Borg helped Google orchestrate and manage its containers at scale, handling tasks like scheduling, scaling, and resource optimization.
Docker’s Rise:
In 2013, Docker revolutionized containers, making them more accessible and widely adopted. However, as developers started using Docker to run containerized apps, they faced the same problem Google had earlier: how to manage containers at scale.
Birth of Kubernetes:
In 2014, Google decided to take its learnings from Borg and develop a new, open-source system to manage containers. This project was named Kubernetes (often abbreviated as K8s). It was designed as a simpler, more developer-friendly version of Borg, incorporating lessons from years of container orchestration experience.
Donation to CNCF:
In 2015, Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF) to foster open development and adoption. This move helped Kubernetes gain widespread community support and become the industry standard for container orchestration.
Kubernetes Architecture:
Before we get into setting up the Kubernetes cluster, it is important to understand Kubernetes architecture.
Node (Minions):
A node is a server, either physical or virtual, where Kubernetes is installed. It acts as a worker machine (worker node) where containers are hosted and launched by Kubernetes. But what happens if the node running the application fails? Naturally, our app would go down. Therefore, we need more than one node.
A cluster is a set of nodes grouped together. This way, even if one node fails, our application remains accessible from the other nodes. Running your Kubernetes cluster on multiple nodes ensures high availability, fault tolerance, and scalability.
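Once a cluster is up, you can list its nodes with kubectl (the output below is purely illustrative; your node names and versions will differ):

```bash
kubectl get nodes
# NAME       STATUS   ROLES           AGE   VERSION
# master-1   Ready    control-plane   10d   v1.29.0
# worker-1   Ready    <none>          10d   v1.29.0
# worker-2   Ready    <none>          10d   v1.29.0
```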
Now that we have a cluster, who manages it? How are the nodes monitored? If a node fails, how do you move the workload to another worker node? That's where the master node comes in.
Master Node:
The Master is another node with Kubernetes installed on it, configured as the master. The Master watches over the worker nodes and is responsible for the actual orchestration of containers on them.
When you install Kubernetes, you are really installing the following components:
1. API Server
What it does:
It's like the reception desk of Kubernetes. All requests (from the CLI, dashboard, or other tools) go to the Kubernetes cluster through the API server.
Example:
When you create a new pod, the API server receives this request, checks if it’s valid, and forwards it to the right component.
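Every kubectl command is really a request to the API server. A quick illustration (assuming kubectl is already configured to talk to a cluster):

```bash
kubectl get pods                       # kubectl turns this into an API request
kubectl get --raw /api/v1/namespaces   # talk to the REST API directly
```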
2. etcd
What it does:
It’s the database of Kubernetes, where all the cluster’s data is stored.
Example:
It stores details like which pods are running, which nodes are available, and cluster configurations. If etcd goes down, Kubernetes loses its memory.
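Because etcd holds the entire cluster state, backing it up matters. Here is a hedged sketch using etcdctl; the endpoint and certificate paths below are typical for kubeadm-built clusters and may differ on yours:

```bash
# Save a point-in-time snapshot of etcd (adjust paths for your cluster).
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```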
3. Scheduler
What it does:
The planner of Kubernetes. It decides which node should run a new pod based on resources like CPU, memory, or other conditions.
Example:
If a pod needs to be created, the scheduler looks at all the nodes and picks the best one to run it.
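What the scheduler considers is driven largely by what the pod asks for. In this illustrative pod spec (placeholder names), the resource requests mean the scheduler will only place the pod on a node with at least that much free CPU and memory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod           # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"      # needs half a CPU core free on the chosen node
          memory: "256Mi"  # ...and 256 MiB of free memory
```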
4. Controller Manager
What it does:
The supervisor of Kubernetes. It watches the cluster to ensure everything is running as planned and fixes issues automatically.
Example:
If a pod crashes, the controller manager notices it and creates a new pod to replace the failed one.
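You can watch this self-healing happen. Assuming the placeholder my-app Deployment from earlier is running, delete one of its pods and the controller replaces it almost instantly:

```bash
kubectl get pods                       # note one of the my-app pod names
kubectl delete pod <one-of-the-pods>   # simulate a crash
kubectl get pods                       # a fresh replacement pod is already being created
```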
5. Kubelet
What it does:
The worker on each node. It talks to the API server and makes sure the containers on the node are running as expected.
Example:
If the API server tells a node to start a new container, the kubelet on that node makes it happen.
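On most clusters the kubelet runs as a systemd service on every node. If you can SSH into a node, you can check on it (these commands assume a systemd-based Linux node):

```bash
systemctl status kubelet    # is the kubelet up and running?
journalctl -u kubelet -f    # follow its logs as it starts and monitors containers
```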
6. Container Runtime
What it does:
The engine that actually runs your containers. Kubernetes doesn’t directly run containers; it uses tools like Docker or containerd to do it.
Example:
If a pod needs a container, the container runtime pulls the container image (e.g., from Docker Hub) and starts it.
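On a node, you can peek at what the runtime is doing with crictl, the standard CLI for CRI-compatible runtimes such as containerd (run on the node itself, usually as root):

```bash
crictl ps       # containers the runtime is currently running
crictl images   # images it has already pulled
```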
7. Kube-Proxy
What it does:
The network manager. It makes sure that pods and services can communicate with each other, even if they are on different nodes.
Example:
If a pod on Node A needs to talk to a service running on Node B, kube-proxy ensures the connection happens smoothly.
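That pod-to-service traffic is usually defined by a Service object, which kube-proxy implements on every node. A minimal illustrative Service, reusing the placeholder my-app label from earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc      # placeholder name
spec:
  selector:
    app: my-app         # send traffic to pods with this label, on any node
  ports:
    - port: 80          # the port the Service exposes
      targetPort: 80    # the pod's containerPort it forwards to
```

Behind the scenes, kube-proxy programs each node’s networking (iptables or IPVS rules) so that traffic to this Service lands on a healthy pod, wherever it happens to run.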
Master-Slave Architecture:
How does one server become the Master and another a Slave?
The Master server runs the kube-apiserver, and that is what makes it the Master.
Similarly, the worker nodes run the kubelet agent, which is responsible for interacting with the Master and reporting the health of the worker node.
Installation:
kubectl:
Kubernetes' command-line tool, kubectl, lets you run commands against your Kubernetes clusters.
Download: https://kubernetes.io/docs/tasks/tools/
Verify your download with (the --client flag skips contacting a cluster, since you don't have one yet):
kubectl version --client
Minikube:
Minikube is a tool that lets you run Kubernetes locally.
Download: https://kubernetes.io/docs/tasks/tools/
Verify it with:
minikube version
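With both tools installed, you can spin up a local single-node cluster and point kubectl at it (minikube picks a driver such as Docker automatically):

```bash
minikube start      # creates and starts a local Kubernetes cluster
kubectl get nodes   # should show a single "minikube" node in Ready state
```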