Kubernetes Architecture Explanation

Gaurav Kumar
5 min read

Introduction

Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. It is often abbreviated as K8s. It groups containers into logical units for easy management and discovery, and it provides features such as automated rollouts, rollbacks, and self-healing.

Kubernetes Architecture Overview

The architecture of Kubernetes is based on a master-worker node model, where each cluster has one or more master (control) nodes and multiple worker nodes. Each node runs several components, each playing a specific role. Let's understand the purpose of the control node, the worker nodes, and the components under each.

Note: The master node is also called the control node.

K8s Architectural Overview

Master/Control Plane Components:

The master or control node is responsible for managing the Kubernetes cluster. It consists of several key components:

In the image below we can see all the components present in the control node.

K8s Architecture - Control Plane

Let's understand the role of each component of the control node.

Kube-API Server
The kube-apiserver is the front end of the Kubernetes control plane. All communication between components in the cluster goes through the API server. Whenever we request the creation of a pod, or make any other request via the command line, it first goes to the API server, which then assigns the work to the other components.
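As a sketch of this flow, every kubectl command is just an HTTP request to the API server; the verbose flag makes those requests visible (a running cluster and configured kubectl are assumed):

```shell
# -v=6 prints the underlying REST calls kubectl makes to the kube-apiserver
kubectl get pods -v=6

# Or talk to the API server directly through a local authenticated proxy:
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```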
etcd
etcd is a distributed key-value store used to hold all cluster-related data. Whenever we create a pod, Service, or any other Kubernetes object, etcd stores that data. If we scale our Deployment from 3 to 5 replicas, for example, this change is stored in etcd. etcd provides reliable data storage and ensures data consistency across the cluster.
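To make this concrete, on a control-plane node you can list the keys etcd stores under its /registry prefix. This is a sketch assuming a kubeadm-style install; the endpoint and certificate paths below are typical kubeadm defaults, not universal:

```shell
# List the etcd keys for all Deployments in the cluster
# (run on a control-plane node; cert paths are kubeadm defaults - an assumption)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments --prefix --keys-only
```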
kube-controller-manager
The kube-controller-manager runs controllers that handle routine tasks in the cluster. Suppose we have a Deployment with 3 replicas of our application. The kube-controller-manager watches the desired state (through the API server) and ensures there are always 3 running pods. If a pod fails, the controller creates a new one to replace it, maintaining the desired state.
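You can watch this self-healing in action. Assuming a 3-replica Deployment like the one described above, delete one pod and the ReplicaSet controller (run by the kube-controller-manager) immediately creates a replacement:

```shell
kubectl get pods                    # note one of the pod names
kubectl delete pod <one-pod-name>   # placeholder: substitute a real pod name
kubectl get pods -w                 # watch a replacement pod appear, restoring 3 replicas
```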
kube-scheduler
The job of the kube-scheduler is to assign pods to nodes based on resource requirements and constraints. When we request a pod to be created as part of our Deployment, the scheduler checks the pod's resource requirements and then finds the most suitable node for it. It considers factors like available resources, node affinity, and taints and tolerations to make the best decision.
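As a sketch, the manifest below shows two of the inputs the scheduler acts on: resource requests and a node constraint. The pod name and the disktype=ssd label are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pinned        # hypothetical example pod
spec:
  nodeSelector:
    disktype: ssd            # only nodes labeled disktype=ssd are considered
  containers:
  - name: webapp
    image: nginx
    resources:
      requests:
        cpu: "250m"          # the scheduler only picks nodes with this much free CPU
        memory: "128Mi"
```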
cloud-controller-manager
The cloud-controller-manager is the component that helps Kubernetes work with cloud provider services such as load balancers, virtual machines, and storage. It ensures that Kubernetes can use these cloud-specific resources smoothly. Suppose we want to use an Amazon EBS (Elastic Block Store) volume as a persistent volume in Kubernetes; cloud provider integration (historically the cloud-controller-manager, on current clusters usually a CSI driver) is what makes this work.
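As a minimal sketch of the EBS example, assuming a cluster running on AWS with the EBS CSI driver installed (on modern clusters the CSI driver, rather than the in-tree cloud provider, handles volume provisioning):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver - must be installed in the cluster
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapp-data            # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
```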

Worker Node Components

Below is the architecture of a worker node and the components available on it.

K8s architecture - worker node

Let's understand the role of each component of a worker node.

Kubelet
We can think of the kubelet as the captain of the worker node. When the kube-scheduler assigns a pod to a worker node, the API server communicates with the kubelet on that node. The kubelet, as captain, takes over from there: it is the kubelet's duty to get the pod running on that node by communicating with the container runtime. After the pod is created, the kubelet reports its status back to the API server.
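On a typical kubeadm-installed Linux node the kubelet runs as a systemd service, so you can watch it doing the work described above (the service name and setup are an assumption about the install method):

```shell
systemctl status kubelet    # the kubelet runs as a node-level service, not a pod
journalctl -u kubelet -f    # its logs show pod creation and status reporting
```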
Container Runtime
The container runtime is the software that actually runs containers. Common runtimes are containerd and CRI-O (Docker was commonly used in older versions). When the kubelet gets an instruction from the API server to run a pod, it communicates with the container runtime and hands over the work of creating the pod's containers. The container runtime pulls the required image from a container registry and runs the container.
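You can inspect the runtime directly with crictl, a CLI that speaks the same Container Runtime Interface (CRI) the kubelet uses; the containerd socket path below is a common default and an assumption:

```shell
# List running containers via the CRI socket (path assumes containerd defaults)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl images   # images the runtime has already pulled
```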
kube-proxy
kube-proxy maintains network rules and performs load balancing, ensuring that network traffic is correctly routed to our pods. If we create a Service to expose our web application, kube-proxy sets up the necessary rules to route requests from the Service to one of our running pods.
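For example, a Service like the hypothetical one below is what kube-proxy turns into routing rules (iptables or IPVS) on every node:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service       # hypothetical Service name
spec:
  selector:
    app: webapp              # matches the pod labels of the Deployment
  ports:
  - port: 80                 # port the Service exposes
    targetPort: 80           # container port kube-proxy forwards traffic to
```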

Understanding Kubernetes Architecture by Deploying a Web Application

Here we create a simple Deployment using the nginx image to understand the architecture.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
        ports:
        - containerPort: 80

Now apply the above Deployment:

kubectl apply -f deployment.yaml

As soon as we apply it, the Deployment specification is sent to the API server, and the following process happens:

  • The API server first validates the YAML configuration.

  • After validation, it stores the configuration in etcd.

  • Once the data is stored in etcd, the API server lets the scheduler find the most suitable node for each pod and lets the kube-controller-manager monitor the desired state of the pods.

  • When the scheduler has found the best available node for a pod, it returns that information to the API server.

  • The API server then communicates with the kubelet of the selected worker node and asks it to create the pod on that node.

  • The kubelet assigns the task of running the containers to the container runtime, while kube-proxy sets up rules to route traffic to the pod based on any Service defined.

  • Once the container runtime has pulled the required image and the container is running successfully, and kube-proxy has set up routing to the pod, they report back to the kubelet, which in turn reports back to the API server.

  • After getting this information from the kubelet, the API server hands the work of monitoring and maintaining the desired state of the pods to the kube-controller-manager.

  • At the end of all these steps, the API server reports back to the user who requested the creation of the pods.
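The steps above can be verified from the command line once the Deployment is applied (cluster access assumed; the pod name is a placeholder):

```shell
kubectl get deployment webapp-deployment   # desired vs. ready replicas
kubectl get pods -l app=webapp -o wide     # which node the scheduler chose for each pod
kubectl describe pod <webapp-pod-name>     # Events: scheduling, image pull, container start
```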

That's the end of our tour of the Kubernetes architecture.


Written by

Gaurav Kumar

I have been working as a full-time DevOps Engineer at Tata Consultancy Services for the past 2.7 years. I have very good experience with the containerization tools Docker, Kubernetes, and OpenShift, and good experience using Ansible, Terraform, and others.