Day 26 of 90 Days of DevOps Challenge: Advancing Container Management with K8s

Vaishnavi D

In my previous blog, I completed my learning about Docker, covering everything from images, containers, volumes, and networks to orchestration with Docker Swarm. I’ve now taken the next big step in my DevOps journey: learning Kubernetes (K8s). This blog post marks the beginning of my Kubernetes learning and blogging series. Let’s explore how Kubernetes builds on Docker and why it’s a game-changer for container orchestration.

Transitioning from Docker to Kubernetes

While Docker allows us to package and run applications in containers efficiently, it lacks robust built-in features for managing multiple containers across distributed systems. That’s where Kubernetes comes in.

Docker                            Kubernetes
Containerization platform         Orchestration platform
Packages code + dependencies      Manages and scales containers
Focused on a single node          Manages clusters of nodes

So while Docker helps you create and run containers, Kubernetes helps you manage and scale them.

What is Kubernetes?

Kubernetes (K8s) is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It’s written in Go and provides a framework to run distributed systems resiliently.

At a high level, it helps with:

  • Deployment

  • Scaling

  • Load balancing

  • Self-healing

  • Container lifecycle management
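
To make this concrete, here is a minimal sketch of a Deployment manifest, the kind of YAML I’ll be writing in upcoming posts. The names here (nginx-demo, the nginx:1.25 image) are placeholders I picked for illustration, not something from a real cluster; the replicas field is what lets Kubernetes handle scaling and self-healing for you.

```yaml
# Minimal Deployment sketch; names and image are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo                 # hypothetical name
spec:
  replicas: 3                      # desired Pod count; Kubernetes keeps it at this number
  selector:
    matchLabels:
      app: nginx-demo
  template:                        # Pod template used for every replica
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25        # any container image works here
          ports:
            - containerPort: 80
```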

Advantages of Kubernetes

  1. Auto Scaling: Automatically adjusts the number of running containers based on system load and usage.

  2. Load Balancing: Distributes network traffic evenly across all running containers.

  3. Self-Healing: Automatically replaces failed or crashed containers to maintain system health.

These features make Kubernetes ideal for high-availability, production-grade systems.
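
Auto scaling, for example, is usually expressed as a HorizontalPodAutoscaler. The sketch below assumes the hypothetical nginx-demo Deployment from the earlier example and simply illustrates the idea of scaling on CPU usage.

```yaml
# Sketch of a HorizontalPodAutoscaler targeting the hypothetical nginx-demo Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-demo-hpa             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-demo
  minReplicas: 2                   # never scale below this
  maxReplicas: 10                  # upper bound on scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU crosses 70%
```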

Kubernetes Architecture Overview

Kubernetes follows a cluster-based architecture made up of:

Control Plane (Master Node)

  • API Server: Receives and processes user requests (via kubectl)

  • Scheduler: Assigns tasks to suitable worker nodes

  • Controller Manager: Ensures the desired state of the system is maintained

  • ETCD: A distributed key-value store acting as the cluster’s internal database

Worker Nodes

  • Kubelet: The agent that runs on each worker node; reports node health and status

  • Kube Proxy: Manages network rules and enables communication across the cluster

  • Container Runtime: The container engine that actually runs the containers (Docker, containerd, or CRI-O)

  • Pod: The smallest deployable unit in Kubernetes, which contains one or more containers

Every container in Kubernetes is wrapped inside a Pod, and Pods are the smallest deployable units.
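
As a quick illustration, here is what a bare Pod manifest looks like. It’s only a sketch with a made-up name; in practice you rarely create Pods directly and instead let a Deployment manage them for you.

```yaml
# Sketch of a standalone Pod: the smallest deployable unit, wrapping one or more containers.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod                  # hypothetical name for illustration
spec:
  containers:
    - name: web                    # a Pod can list more than one container here
      image: nginx:1.25
      ports:
        - containerPort: 80
```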

Kubernetes in Action

When deploying an app:

  1. We send a command using kubectl.

  2. The API Server receives it, validates it, and stores the new object in ETCD; the Pod starts out in a Pending state.

  3. The Scheduler picks it up and finds the best worker node.

  4. Kubelet on that node creates the Pod.

  5. Kube Proxy handles the networking part.

  6. The Controller Manager ensures the app is running as expected.
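
Putting that flow into practice, the manifest handed to kubectl is typically a Deployment like the earlier sketch plus a Service, so Kube Proxy can route traffic to the Pods. The Service below is again a hedged sketch that assumes the hypothetical app: nginx-demo labels from before; applying it would look like kubectl apply -f service.yaml.

```yaml
# Sketch of a Service that load-balances traffic across Pods labeled app: nginx-demo.
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-svc             # hypothetical name
spec:
  selector:
    app: nginx-demo                # matches the labels on the Deployment's Pods
  ports:
    - port: 80                     # port exposed by the Service
      targetPort: 80               # container port traffic is forwarded to
  type: ClusterIP                  # default type; traffic is routed inside the cluster
```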

Final Thoughts

The transition from Docker to Kubernetes feels like stepping from a toolkit into an entire ecosystem. While Docker helps package and run your app, Kubernetes empowers you to scale, heal, and manage those applications at production scale.

Today was about understanding the why and how of Kubernetes, from architecture to the components involved in orchestration. In the next posts, I’ll be setting up my first cluster and deploying containers using YAML files.

Stay tuned as I dive deeper into Pods, Deployments, Services, and more.
