Kubernetes Architecture Explanation
Introduction
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. It is commonly abbreviated as K8s. It groups containers into logical units for easy management and discovery, and provides features such as automated rollouts, rollbacks, and self-healing.
Kubernetes Architecture Overview
The architecture of Kubernetes is based on a master-worker node model, where each cluster has one or more master (control plane) nodes and multiple worker nodes. Each node runs several components, each playing a specific role. Let's understand the role of the control plane node, the worker nodes, and the components under each of them.
Note: The master node is also called the control plane node.
Master/Control Plane Components:
The master (control plane) node is responsible for managing the Kubernetes cluster. It consists of several key components.
The image below shows all the components present in the control plane node.
Let's understand the work of each component of the control plane node.
kube-apiserver: The front end of the control plane. It exposes the Kubernetes API, and all communication within the cluster (from users, controllers, and nodes) goes through it.
etcd: A consistent, highly available key-value store that holds all cluster data and configuration.
kube-controller-manager: Runs the controllers (Deployment, ReplicaSet, Node, and others) that continuously reconcile the actual state of the cluster with the desired state.
kube-scheduler: Watches for newly created pods that have no node assigned and selects the best node for each of them to run on.
cloud-controller-manager: Integrates the cluster with the underlying cloud provider, for example by provisioning load balancers and managing cloud node lifecycle.
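On a kubeadm-style cluster, the control plane components above run as static pods in the kube-system namespace, so you can see them with kubectl. This is a quick sketch assuming you have kubectl configured against a running cluster:

```shell
# List the control plane components running in the kube-system namespace
# (on kubeadm-style clusters they run there as static pods)
kubectl get pods -n kube-system -o wide

# Inspect the API server pod in detail (the exact pod name varies per cluster)
kubectl describe pod -n kube-system -l component=kube-apiserver
```

On managed clusters (EKS, GKE, AKS) the control plane is hidden from you, so these pods may not be visible.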
Worker Node Components
Below is the architecture of a worker node and the components available on it.
Let's understand the work of each component of the worker node.
Kubelet: An agent that runs on every worker node. It receives pod specifications from the API server and ensures the containers described in them are running and healthy.
Container Runtime: The software that actually pulls images and runs containers, such as containerd or CRI-O.
kube-proxy: A network proxy that runs on each node and maintains network rules so that traffic sent to a Service is routed to the right pods.
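You can inspect the worker node components from the command line. A small sketch, assuming kubectl access to the cluster and SSH access to a worker node:

```shell
# List the nodes along with the container runtime each one is using
kubectl get nodes -o wide

# kube-proxy runs as a DaemonSet pod on every node
kubectl get pods -n kube-system -l k8s-app=kube-proxy

# The kubelet itself runs as a systemd service on the node, not as a pod
# (run this on the worker node itself)
systemctl status kubelet
```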
Understanding Kubernetes Architecture by Deploying a Web Application
Here we create a simple Deployment using the nginx image to understand how the architecture works together.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
        ports:
        - containerPort: 80
Now apply the above deployment:
kubectl apply -f deployment.yaml
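Before walking through what happens internally, it helps to watch the result from the outside. A quick sketch, assuming the Deployment above was applied to a running cluster:

```shell
# Wait for the Deployment to finish rolling out its three replicas
kubectl rollout status deployment/webapp-deployment

# List the pods created by the Deployment; the NODE column shows
# which worker node the scheduler placed each pod on
kubectl get pods -l app=webapp -o wide
```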
As soon as we apply the deployment, the Deployment specification is sent to the API server, and then the following process happens.
The API server first validates the YAML configuration.
After validation, it stores the configuration in etcd.
Once the data is stored in etcd, the scheduler (watching through the API server) looks for the best suitable node for the pods, while the kube-controller-manager monitors the desired state of the pods.
Once the scheduler finds the best available node for a pod, it returns this information to the API server.
The API server then communicates with the kubelet of the selected worker node and asks it to create the pods on that node.
The kubelet assigns the task of running the containers to the container runtime, and kube-proxy routes traffic to the pods based on the Service defined.
Once the container runtime has pulled the required image and started the container successfully, and kube-proxy has set up routing to the pods, both report back to the kubelet, which in turn reports back to the API server.
After getting this information from the kubelet, the API server hands over the work of monitoring and maintaining the desired state of the pods to the kube-controller-manager.
At the end of all these steps, the API server reports back to the user who requested the creation of the pods.
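The flow above mentions kube-proxy routing traffic based on a Service, but the Deployment alone does not create one. A minimal Service for this Deployment might look like the following sketch (the name webapp-service is an illustrative assumption, not from the original manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service   # illustrative name
spec:
  selector:
    app: webapp          # matches the pod labels from the Deployment
  ports:
  - port: 80             # port the Service listens on
    targetPort: 80       # containerPort of the webapp pods
```

Once this Service is applied, kube-proxy on every node programs the network rules so that traffic sent to the Service's cluster IP is load-balanced across the three webapp pods.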
That's the end of the Kubernetes architecture walkthrough.
Written by
Gaurav Kumar
I have been working as a full-time DevOps Engineer at Tata Consultancy Services for the past 2.7 years. I have very good experience with the containerization tools Docker, Kubernetes, and OpenShift, and good experience using Ansible, Terraform, and others.