Understanding How ReplicaSets Protect Every Pod in Kubernetes

We all started by memorizing kubectl create -f some-random.yaml and hoping the pods would magically run. But somewhere deep inside, there was this itch: who is making sure things don’t fall apart when a pod dies?
Let’s talk about the unsung hero of Kubernetes reliability: the ReplicaSet.
Who’s Watching Everything? Controllers.
Kubernetes isn’t just a scheduler. It’s more like a nervous system. The controller is its brain.
It doesn’t just launch containers—it constantly checks the desired state vs the actual state. If you said “I want 3 pods,” and only 2 are running, a controller will freak out and make it 3 again.
It’s not a one-off reaction. It’s observing + correcting, forever.
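You can actually watch this loop work. A quick sketch, assuming a ReplicaSet like the app-rs one defined later in this post is already running (the pod name below is made up; use a real one from your cluster):
# Terminal 1: watch the pods
kubectl get pods --watch
# Terminal 2: delete one pod on purpose
kubectl delete pod app-rs-x7k2p
# Within seconds the controller sees actual < desired and creates a replacement. No human involved.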
What’s a Replica, Really?
Imagine you’re running a gym. You have 1 trainer. He gets sick. Now all clients are frustrated.
Now imagine you have 3 trainers trained the same way. Even if 1 leaves, the system doesn’t collapse.
That’s what a replica is: an identical copy of your pod running alongside the others, so losing one doesn’t take the app down. High availability, plain and simple.
Enter: ReplicationController (RC)
It was Kubernetes’ first attempt at making sure you always had N pods running. Even if it manages just 1 pod, the RC will recreate that pod if it crashes.
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-rc
spec:
  replicas: 6
  template:
    metadata:
      name: myapp-pod
      labels:
        app: my-app
        type: backend
    spec:
      containers:
        - name: nginx-container
          image: nginx
✅ Even if you already created the pods manually, the template is still needed: RC doesn’t care about the past, it works from the desired future, and the template is what it uses to create replacement pods if any vanish. That’s why template is non-negotiable.
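A quick way to check the RC is doing its job (the file name and object names assume the manifest above):
kubectl create -f rc-definition.yaml
# DESIRED, CURRENT and READY should all settle at 6
kubectl get replicationcontroller app-rc
# The pods it created, filtered by the label from the template
kubectl get pods -l app=my-app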
But ReplicaSet is the New RC
ReplicationController was good. But ReplicaSet is smarter. It introduced selectors that are more expressive (there’s a small sketch of that right after the warning below).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app-rs
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
      type: backend
  template:
    metadata:
      labels:
        app: my-app
        type: backend
    spec:
      containers:
        - name: nginx-container
          image: nginx
⚠️ If you use apiVersion: v1 instead of apps/v1, it’ll throw an error like no matches for kind "ReplicaSet" in version "v1", because ReplicaSet lives in the apps API group.
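Here’s what “more expressive” buys you: besides matchLabels, an RS selector can also use matchExpressions with set-based operators (In, NotIn, Exists, DoesNotExist). A minimal sketch of the same selector from the spec above, rewritten that way:
selector:
  matchExpressions:
    - key: app
      operator: In
      values:
        - my-app
    - key: type
      operator: Exists
ReplicationController only understands plain equality selectors, so filters like these are ReplicaSet-only.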
Wait—How Does RS Know Which Pods to Watch?
Imagine 500 pods running across a cluster. If you don’t tag your 3 backend pods properly, how will RS know what to watch?
That’s where labels and selectors come in.
labels = the identity stamp you assign to pods.
selector = the filter RS uses to find them.
If the label and selector match, RS treats them like its children.
So when a pod with label app: my-app dies, RS knows: “That was mine. I must recreate it.”
It’s like being a parent at a fair. You look for the kid with the red cap. You don’t yell every kid’s name—you look for your selector.
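You can run the same filter RS runs. A small sketch using the labels from the manifests above:
# Every pod, with its labels printed
kubectl get pods --show-labels
# Only the pods this ReplicaSet would claim as its children
kubectl get pods -l app=my-app,type=backend
# Or ask the RS directly which selector it is using
kubectl describe rs app-rs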
Scaling the Right Way
You start with 6 pods. But Black Friday hits. You want 12. Two ways:
1. Edit YAML + Replace
# Change replicas: 6 to replicas: 12
kubectl replace -f rs-definition.yaml
✅ Use when you want to change other configs too, like image version, labels, etc.
2. Scale on the Fly
kubectl scale --replicas=12 rs app-rs
✅ Use when you just want more or fewer pods, fast. Great for dynamic traffic spikes.
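Either way, confirm what the cluster actually thinks afterwards. One gotcha: kubectl scale only changes the live object, so your YAML file still says replicas: 6 until you edit it.
# DESIRED should now show 12
kubectl get rs app-rs
# Scale back down once the traffic spike passes
kubectl scale --replicas=6 rs app-rs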
TL;DR
ReplicaSet is that reliable background process making sure your app doesn’t die silently. It monitors your pods like a hawk and corrects the count without asking you.
Forget about blindly writing YAML. Start understanding why each line matters. That’s how you master Kubernetes.
All Commands
# Create RC
kubectl create -f rc-definition.yaml
# Create RS
kubectl create -f rs-definition.yaml
# Check pods
kubectl get pods
# Check RS
kubectl get rs
# Scale RS (fast)
kubectl scale --replicas=12 rs app-rs
# Replace RS (clean)
kubectl replace -f rs-definition.yaml