Understanding Kubernetes: A Guide to Its Architecture


What is Kubernetes?
Kubernetes, often referred to as K8s, is an open-source platform for container orchestration and management.
The platform offers a range of features, including orchestration, service discovery, load balancing, and dynamic scaling. It also provides infrastructure orchestration and self-healing capabilities, making it robust and reliable for managing containerized applications.
Originally developed by Google to handle applications in clustered environments, Kubernetes was later contributed to the Cloud Native Computing Foundation (CNCF) to foster its growth as a global standard.
Built using the Go programming language (Golang), Kubernetes is designed to efficiently manage the lifecycle of containerized services and applications. Its methods emphasize scalability, predictability, and high availability, ensuring seamless operations across diverse environments.
Kubernetes Architecture
Kubernetes follows a cluster architecture: a Kubernetes cluster is a set of physical or virtual machines.
Each machine in a Kubernetes cluster is called a node. There are two types of nodes in each Kubernetes cluster:
Master Node
Worker Node
Kubernetes Master
The Kubernetes master is the core component responsible for managing the entire cluster. It serves as the access point for administrators and users, allowing interactions via the Command-Line Interface (CLI), graphical user interface (GUI), or API.
Its primary functions include monitoring worker nodes within the cluster and orchestrating containers across them. To ensure fault tolerance, a cluster may have multiple master nodes.
The master comprises four key components:
ETCD:
A distributed, reliable key-value store that holds all data required for cluster management. It stores the state of all nodes and masters in the cluster, ensuring data consistency and preventing conflicts through its consensus-based replication.
Scheduler:
Responsible for assigning containers to nodes within the cluster, it identifies newly created containers and ensures their proper placement.
API Server:
Acts as the communication hub between the master and the cluster. It validates and executes REST commands and ensures configurations in ETCD align with container deployments in the cluster.
Controller Manager:
Operates control loops to maintain the desired state of the cluster, handling tasks like scaling deployments and maintaining replicas. It also responds proactively to failures in nodes, containers, or endpoints, spinning up new resources when needed.
Kubernetes Worker Node
Kubelet
The worker nodes include the kubelet agent, which serves as the main interface between the worker node and the master node. Its responsibilities include:
Providing health updates about the worker node to the master.
Executing tasks and actions requested by the master node.
Kube-Proxy
The kube-proxy is responsible for managing network traffic within the cluster. It ensures that internal and external traffic is routed correctly to Services, based on rules derived from:
Service and Endpoint objects it watches through the API server.
Custom controllers and configurations.
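As a sketch of what kube-proxy routes, a Service object like the one below tells it to forward traffic arriving at the Service's port to matching Pods. The names, labels, and ports here are illustrative assumptions, not taken from a real deployment:

```yaml
# Hypothetical Service: kube-proxy programs routing rules so that
# traffic sent to this Service reaches Pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative name
spec:
  selector:
    app: web               # routes to Pods carrying this label (assumed)
  ports:
    - protocol: TCP
      port: 80             # port the Service exposes
      targetPort: 8080     # container port traffic is forwarded to (assumed)
```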
Container Runtime:
The software component required to run containers. Kubernetes supports multiple container runtimes, such as containerd and CRI-O, that comply with the Container Runtime Interface (CRI); Docker Engine can also be used through a CRI adapter.
Pods
A Pod is the smallest building block in a Kubernetes cluster.
A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
Applications are deployed as Pods in Kubernetes; a single application can run across multiple Pods.
Kubernetes uses YAML to describe the desired state of the containers in a Pod. This is also called a Pod spec.
In the Pod manifest YAML, we configure the container image to run.
If a Pod is damaged, crashed, or deleted, Kubernetes creates a new Pod in its place (self-healing).
If an application runs in multiple Pods, Kubernetes distributes the load across all running Pods.
The Pod count can be increased or decreased automatically based on load (scalability).
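The Pod spec described above can be sketched as a minimal manifest. The Pod name, labels, and nginx image are illustrative assumptions:

```yaml
# Minimal Pod spec: one container running an assumed nginx image.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod           # illustrative Pod name
  labels:
    app: my-app
spec:
  containers:
    - name: my-app           # container name inside the Pod
      image: nginx:1.25      # the container image configured in the manifest (assumed)
      ports:
        - containerPort: 80  # port the container listens on
```

Applying this with `kubectl apply -f pod.yaml` hands the desired state to the API server; the scheduler and kubelet then take over, as the workflow below describes.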
Workflow of K8s:
When a user deploys an application:
The API Server receives the deployment request and stores it in the etcd database.
The Scheduler assigns the pods to appropriate nodes.
The Kubelet on each node ensures the assigned containers are running as defined.
The Controller Manager monitors the cluster state and ensures it matches the desired state.
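As a hedged example of this workflow end to end, a Deployment manifest like the one below declares a desired state of three replicas; the Controller Manager then keeps three Pods running, recreating any that fail. The names and image are illustrative assumptions:

```yaml
# Hypothetical Deployment: the Controller Manager maintains 3 replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment    # illustrative name
spec:
  replicas: 3                # desired state: three Pods at all times
  selector:
    matchLabels:
      app: my-app
  template:                  # Pod template the scheduler places onto nodes
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25  # assumed image
```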
This architecture makes Kubernetes powerful in automating deployment, scaling, and management.
For easier understanding:
The API Server receives the request sent by kubectl and stores it in etcd (the database for the Kubernetes cluster) with a pending status.
The Scheduler identifies pending requests in etcd and selects a worker node on which to schedule the task (using node information reported by the kubelet).
The kubelet, also called the node agent, maintains all the worker node information.
Kube-proxy provides networking for cluster communication.
The Controller Manager verifies whether all tasks are working properly.
On the worker node, a container runtime such as Docker Engine is available to run containers.
In Kubernetes, containers are created inside Pods; the Pod is the smallest building block we can create in a Kubernetes cluster.
Inside a Pod, the container is created; in Kubernetes, everything is represented as a Pod.
A Pod is a runtime process; we deploy our applications as Pods in Kubernetes.
Conclusion
This article aimed to provide a high-level overview of Kubernetes and its architecture, hopefully sparking your curiosity and encouraging you to delve deeper into its fascinating ecosystem. If you're intrigued, I urge you to explore Kubernetes further and immerse yourself in the rich world of cloud-native technologies.
Remember, Kubernetes may seem complex at first, but don't let it intimidate you. Once you've deployed a few applications, it becomes much more straightforward and intuitive to use—especially for developers. So, embrace the challenge and enjoy the journey!
Before you leave
If you enjoy the content I share, feel free to connect with me on LinkedIn (Sai Prasad Annam); there's a lot more to explore, and I think you'll find it intriguing!
And feel free to check out my Kubernetes publication, Kubernetes.
Written by
SAI PRASAD ANNAM
Hi there! I'm Sai Prasad Annam, an enthusiastic and aspiring DevOps engineer and Cloud engineer with a passion for integrating development and operations to create seamless, efficient, and automated workflows. I'm driven by the challenges of modern software development and am dedicated to continuous learning and improvement in the DevOps field.