Day 34 of 90 Days of DevOps Challenge: Diving into ReplicaSets in Kubernetes


Yesterday, I explored Kubernetes Deployments, a higher-level abstraction that manages ReplicaSets and Pods. I learned how Deployments ensure zero-downtime through rolling updates, allow version control with rollbacks, and support effortless scaling. It was my first experience seeing Kubernetes take full control of application lifecycle management, and it was impressive!
Today, I zoomed in on a key component that works under the hood of Deployments: the ReplicaSet. While Deployments provide automation and convenience, it’s the ReplicaSet that manages the Pod lifecycle, ensuring a specific number of Pods are running at all times. Understanding this is crucial before moving on to advanced controllers like StatefulSets or DaemonSets.
Why Not Use Plain Pods?
Up to this point, I created Pods directly using YAML files (kind: Pod). But this approach comes with a serious drawback:
- If a Pod is deleted or crashes, Kubernetes does not recreate it.
This means:
- Application downtime
- No self-healing
- Zero high availability
NOTE: Directly creating Pods is fine for learning, but it’s not recommended in production environments.
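For context, here is the kind of bare Pod manifest I had been writing so far (a minimal sketch, reusing the same names, image, and label as the ReplicaSet example further below):

apiVersion: v1
kind: Pod
metadata:
  name: javawebpod
  labels:
    app: javawebapp
spec:
  containers:
  - name: javawebcontainer
    image: zerotoroot/javawebapp
    ports:
    - containerPort: 8080

If this Pod dies, nothing brings it back. That is exactly the gap ReplicaSets fill.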
Enter ReplicaSets
A ReplicaSet is a Kubernetes resource that creates and manages Pods. It ensures:
- A defined number of Pods (replicas) are always running
- Lost or crashed Pods are automatically recreated
- Manual scaling (up or down) of application Pods
By doing this, ReplicaSets provide high availability, resilience, and basic scalability for applications.
Sample ReplicaSet + Service YAML
Here’s a practical example I used today:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: javawebrs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebpod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebcontainer
        image: zerotoroot/javawebapp
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
This creates:
- A ReplicaSet (javawebrs) managing 2 Pods with the label app: javawebapp
- A LoadBalancer Service (javawebappsvc) to expose those Pods
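To confirm the Service actually picked up the ReplicaSet's Pods, one quick sanity check (assuming everything landed in the default namespace) is to list the Service's Endpoints, where the Pod IPs should appear:

$ kubectl get endpoints javawebappsvc

If the Endpoints list is empty, the Service selector and the Pod template labels probably don't match.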
Commands I Practiced
# Apply the ReplicaSet and Service
$ kubectl apply -f replicaset.yaml
# Get all resources
$ kubectl get all
# See running pods
$ kubectl get pods
# View ReplicaSets
$ kubectl get rs
# Delete a Pod to test auto-recovery
$ kubectl delete pod <pod-name>
$ kubectl get pods # New pod will be created automatically
# Manually scale the ReplicaSet
$ kubectl scale rs javawebrs --replicas=3
# Delete the ReplicaSet (which deletes its Pods)
$ kubectl delete rs javawebrs
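To actually watch the self-healing happen, it helps to stream Pod changes in one terminal while deleting a Pod in another (the -w flag is kubectl's standard watch option):

# Terminal 1: watch Pods in real time
$ kubectl get pods -w

# Terminal 2: delete one of the ReplicaSet's Pods
$ kubectl delete pod <pod-name>

Within seconds, a replacement Pod appears in the watch output with a new name but the same app: javawebapp label.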
Key Takeaways:
- Self-Healing: If a Pod is deleted, the ReplicaSet automatically recreates it.
- Manual Scaling: You can increase or decrease replicas using kubectl scale.
- Pod Ownership: Pods deleted by hand are simply recreated; to remove them for good, delete the ReplicaSet itself.
- No Rollouts: Unlike Deployments, ReplicaSets don't support rolling updates, rollbacks, or autoscaling.
NOTE: Want features like automated rollbacks or scaling based on load? Use Deployments, which manage ReplicaSets for you.
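For comparison, here is roughly what the equivalent Deployment manifest would look like (a minimal sketch; the name javawebdeploy is my own placeholder, everything else mirrors the ReplicaSet above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebdeploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebcontainer
        image: zerotoroot/javawebapp
        ports:
        - containerPort: 8080

The spec is nearly identical; the Deployment simply creates and versions ReplicaSets under the hood, which is what unlocks rolling updates and rollbacks.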
ReplicaSet vs Deployment
| Feature | ReplicaSet | Deployment |
| --- | --- | --- |
| Self-healing Pods | ✅ | ✅ |
| Manual Scaling | ✅ | ✅ |
| Rolling Updates | ❌ | ✅ |
| Rollbacks | ❌ | ✅ |
| Auto Scaling | ❌ | ✅ (with HPA) |
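That last row deserves a quick illustration: autoscaling isn't built into the Deployment itself but comes from pointing a HorizontalPodAutoscaler at it. A hedged sketch, assuming a Deployment named javawebdeploy (my placeholder from above) and a metrics server installed in the cluster:

# Scale between 2 and 5 replicas, targeting 70% average CPU
$ kubectl autoscale deployment javawebdeploy --min=2 --max=5 --cpu-percent=70

# Inspect the resulting HPA
$ kubectl get hpa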
Final Thoughts
Understanding ReplicaSets gave me a solid foundation in Kubernetes resource management. It's the engine that keeps Pods alive and ensures high availability. While Deployments offer more automation, knowing how ReplicaSets work is essential to troubleshoot, scale manually, or even build custom controllers.
Next up, I’ll explore StatefulSets, which are designed for stateful applications like databases, where stable network identities and persistent storage matter.
Stay tuned for more Kubernetes magic!