Day 27 of 90 Days of DevOps Challenge: Understanding Kubernetes Cluster Setup, Pods, and Services


After understanding Kubernetes architecture yesterday, I wanted to explore how Kubernetes clusters are set up, and how core resources like Pods and Services work in a real-world scenario. Here’s a breakdown of what I learned today:
Types of Kubernetes Cluster Setups
While exploring how to get started with a Kubernetes cluster, I came across three main types of cluster setups. Each is suited to different use cases and environments:
1. Minikube – Single-node Cluster for Practice
Minikube is an easy-to-install tool that helps spin up a single-node Kubernetes cluster locally on your system.
Why Minikube?
Perfect for learning, testing, and prototyping.
Designed for developers and beginners to try out Kubernetes without the need for a complex setup.
It runs inside a virtual machine or container on your laptop and includes all the necessary Kubernetes components (API Server, Scheduler, Controller Manager, etc.).
Use Case:
Practicing Kubernetes commands
Testing YAML manifests
Simulating workloads in a dev environment
NOTE: Not recommended for production or scaling beyond basic use cases.
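For reference, a minimal Minikube session might look like this (a sketch assuming minikube and kubectl are already installed, with Docker available as the driver):

```bash
# Spin up a local single-node cluster (driver choice is illustrative)
minikube start --driver=docker

# Verify the node is up and Ready
kubectl get nodes

# Stop or tear down the cluster when finished practicing
minikube stop
minikube delete
```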
2. Kubeadm Cluster – Self-Managed Cluster
Kubeadm is a tool provided by the Kubernetes community that allows you to bootstrap your own Kubernetes cluster from scratch.
Why Kubeadm?
Gives you complete control over your cluster setup and configuration.
Great for understanding what happens under the hood in Kubernetes.
You manually configure the control plane, worker nodes, and other core components.
What You Manage:
API Server
Networking (like CNI plugins)
TLS certificates & RBAC
Cluster scaling
Maintenance, upgrades, and monitoring
Use Case:
Learning how Kubernetes clusters operate in-depth
Custom, private Kubernetes setups for advanced use cases
NOTE: Self-management means you’re responsible for everything, including stability, updates, and security. It’s a great learning experience but requires more operational effort.
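A rough sketch of what bootstrapping with kubeadm looks like (assuming kubeadm, kubelet, and a container runtime are already installed on each node; the CIDR and CNI choice here are illustrative):

```bash
# On the control-plane node: initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user, as kubeadm's output instructs
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin (Flannel shown as one option)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker node, run the join command that `kubeadm init` printed:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```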
3. Provider-Managed Clusters – Production-Ready, Cloud-Hosted
For real-world production applications, most organizations use cloud-managed Kubernetes services offered by major cloud providers. These services offer high availability, built-in scalability, and operational simplicity.
Common Managed Kubernetes Services:
Amazon EKS (Elastic Kubernetes Service) – AWS
Azure AKS (Azure Kubernetes Service) – Microsoft Azure
Google GKE (Google Kubernetes Engine) – Google Cloud
Why Managed Clusters?
Providers handle the control plane management: provisioning, upgrades, fault-tolerance, and security patches.
Easily integrate with other cloud-native services (like IAM, logging, auto-scaling, etc.)
Support for autoscaling, monitoring, and load balancing out of the box.
Use Case:
Deploying large-scale applications in production
Enterprise-grade solutions with robust reliability and scalability
Teams looking to focus on workloads rather than infrastructure management
NOTE: These services are paid, but well worth it when you're running serious production workloads that demand uptime, speed, and support.
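As one example, creating a managed cluster on AWS with eksctl can be as short as this (a sketch assuming eksctl is installed and AWS credentials are configured; the cluster name, region, and node count are illustrative):

```bash
# Create an EKS cluster; AWS provisions and manages the control plane
eksctl create cluster --name demo-cluster --region us-east-1 --nodes 2

# eksctl updates kubeconfig automatically; verify access
kubectl get nodes
```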
Kubernetes Resources Overview
Today, I also explored some of the most common Kubernetes objects and their roles in cluster operations:
Pods
Services (ClusterIP, NodePort, LoadBalancer)
Namespaces
ReplicationController / ReplicaSet
Deployments
DaemonSets
StatefulSets
Ingress Controllers
Horizontal Pod Autoscaler (HPA)
Helm Charts
Monitoring using Grafana & Prometheus
EFK Stack for application log monitoring
What is a Pod in Kubernetes?
The Pod is the smallest deployable unit in Kubernetes. Here’s what I understood:
It represents a single instance of a running process.
Pods are used to deploy containers (like Docker containers) in K8s.
A single application can run across multiple pods for redundancy and scalability.
We define Pods using YAML manifests where we declare container images and configurations.
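Here is a minimal example of such a manifest (names and the nginx image are illustrative):

```yaml
# pod.yaml - a minimal Pod running a single container
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx        # labels let Services and controllers select this Pod
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```

You would apply it with `kubectl apply -f pod.yaml` and check it with `kubectl get pods`.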
What really stood out to me is Kubernetes’ self-healing capability:
If a pod crashes, Kubernetes (through controllers like ReplicaSets and Deployments) automatically replaces it with a new one.
Also, when multiple pods run behind a Service, Kubernetes distributes traffic across them, giving built-in load balancing, and with the Horizontal Pod Autoscaler it can scale the number of pods based on real-time demand.
Kubernetes Services
Since pods are temporary, their IPs keep changing. Accessing them directly isn’t practical.
That’s where Kubernetes Services come into play. Services give stable access points to pods.
Types of Services:
ClusterIP
The default type of service.
Provides internal communication between pods inside the cluster.
Good for microservices that talk to each other behind the scenes.
NodePort
Exposes the service on a static port on each worker node.
Can be accessed externally via <NodeIP>:<NodePort>.
LoadBalancer
Provisions an external load balancer (supported by cloud providers).
Best suited for production environments where external access is required.
Key Point about ClusterIP:
Since Pod IPs are ephemeral, a ClusterIP Service groups the pods matching its label selector behind one stable virtual IP. Even when pods restart or move, clients can still reach them through the Service.
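A minimal Service manifest tying this together might look like the following (assuming pods labeled app=nginx exist, as in a typical example; names are illustrative):

```yaml
# service.yaml - a ClusterIP Service fronting the pods labeled app=nginx
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP      # the default; shown explicitly for clarity
  selector:
    app: nginx         # must match the Pod labels
  ports:
    - port: 80         # port the Service exposes inside the cluster
      targetPort: 80   # container port on the selected pods
```

Changing `type` to NodePort or LoadBalancer is how the same Service is exposed outside the cluster.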
Final Thoughts
This was a knowledge-heavy but very insightful day. Understanding how Kubernetes clusters are set up and how pods and services operate gave me a solid foundation for deploying and managing containerized applications in a scalable and reliable way. Next, I plan to explore Deployments, ReplicaSets, and Autoscaling. I can’t wait to get hands-on with YAML and kubectl even more!
Thanks for reading and as always, let’s keep learning, building, and deploying!