Understanding ReplicationController in Kubernetes


In Kubernetes, a Pod is ephemeral. If a Pod crashes or the Node it runs on goes down, your app might stop working. That's where the ReplicationController (RC) comes in: it acts like a bodyguard for your Pods, making sure the right number of them are always running.
1. What Is a ReplicationController?
A ReplicationController is a Kubernetes object that ensures a fixed number of identical Pods are running at all times.
Self-healing: If a Pod crashes or is deleted, RC creates a new one.
No extras allowed: If someone manually creates extra Pods with the same labels, RC deletes them to maintain the correct count.
Label-based tracking: RC doesn’t care about Pod names; it uses labels to find and manage Pods.
2. Desired vs Observed State
Desired state: How many Pods you want (e.g., 3).
Observed state: How many Pods are currently running with the correct labels.
RC constantly compares these two and adjusts the number of Pods to match your desired state.
3. How ReplicationController Works
The RC runs a continuous control loop to keep your desired number of Pods alive. Here's how it does that:
1. Watches All Pods
RC constantly monitors all Pods in the cluster.
It doesn’t look at Pod names—it focuses on labels.
2. Filters by Label Selector
RC uses the spec.selector field to find Pods with matching labels. Only Pods with these labels are considered part of its group.
3. Counts Matching Pods
RC checks how many Pods match the selector.
If the count is less than desired, RC creates new Pods using the spec.template.
If the count is more than desired, RC deletes extra Pods to bring the number back down.
4. Adopts Orphan Pods
If there are Pods with matching labels but no controller managing them:
RC can adopt these Pods.
This means they now count toward the replica target.
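The control loop described above can be sketched as a toy simulation in Python. This is not real controller code (the actual controller watches the API server and handles many edge cases); the pod dictionaries and name scheme here are simplified assumptions for illustration:

```python
# Toy simulation of the RC reconcile loop: filter Pods by labels,
# compare the observed count to the desired count, then create or delete.

def selector_matches(selector, pod_labels):
    # A Pod matches when every selector key/value appears in its labels.
    return all(pod_labels.get(k) == v for k, v in selector.items())

def reconcile(pods, selector, desired, template_labels):
    matching = [p for p in pods if selector_matches(selector, p["labels"])]
    if len(matching) < desired:
        # Too few: create new Pods from the template.
        for i in range(desired - len(matching)):
            pods.append({"name": f"new-pod-{i}", "labels": dict(template_labels)})
    elif len(matching) > desired:
        # Too many: delete the extras.
        for p in matching[desired:]:
            pods.remove(p)
    return pods

pods = [{"name": "javawebapprc-8b5fp", "labels": {"app": "javawebapp"}}]
reconcile(pods, {"app": "javawebapp"}, 2, {"app": "javawebapp"})
print(len(pods))  # 2
```

Note that Pods whose labels do not match the selector are simply ignored by the loop, which is exactly why the label selector is so important.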
4. ReplicationController (RC) YAML
apiVersion: v1
kind: ReplicationController
metadata:
  name: javawebapprc
  namespace: test-ns
spec:
  replicas: 2
  selector:
    app: javawebapp
  template:
    metadata:
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebapprccon
        image: kkeducationb2/java-webapp:1.1
        ports:
        - containerPort: 8080
5. Key Concepts
1. Two spec sections
Outer spec → belongs to the ReplicationController
Controls how many Pods to run (replicas). This is where the RC's behavior is defined.
replicas: 2 means you want 2 Pods running at all times.
Defines how to select Pods (selector).
Provides the template for creating Pods.
Inner spec → belongs to the Pod template
Describes what each Pod should look like.
Includes container details like image, name, and ports.
2. Why selector Is Crucial
selector:
  app: javawebapp
This tells RC: only manage Pods with the label app=javawebapp. RC does not track Pod names, only labels. If your Pods don’t have this label, RC won’t manage them.
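Label matching is just a subset test: every key/value pair in the selector must be present in the Pod's labels, while extra Pod labels are fine. A minimal sketch of the idea (not the real implementation):

```python
# A selector matches a Pod when all of its key/value pairs
# appear in the Pod's labels; extra labels on the Pod are ignored.
def selector_matches(selector, pod_labels):
    return all(pod_labels.get(k) == v for k, v in selector.items())

print(selector_matches({"app": "javawebapp"}, {"app": "javawebapp", "tier": "web"}))  # True
print(selector_matches({"app": "javawebapp"}, {"app": "other"}))  # False
print(selector_matches({"app": "javawebapp"}, {}))  # False
```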
3. Common Mistake: Missing Labels in Pod Template
When defining a ReplicationController (RC), one of the most frequent mistakes is forgetting to add labels inside the Pod template.
What People Often Do Wrong
template:
  metadata:
    # Missing labels here!
- This causes a problem: the selector no longer matches the Pod template, so Kubernetes will typically reject the RC with a validation error. And even if such a spec were accepted, the RC could not recognize the Pods it creates, because it uses labels, not names, to track and manage Pods.
4. Correct Way to Define Labels
- Always include matching labels inside the Pod template:
template:
  metadata:
    labels:
      app: javawebapp
This ensures:
RC can find and manage the Pods it creates.
The number of Pods stays at the desired count.
Self-healing works correctly when Pods crash or are deleted.
5. Setting metadata.name Inside the Pod Template
metadata:
  name: javawebapprc
This is not recommended. RC will ignore this name and generate Pod names like javawebapprc-abc12. If you set a name manually, it can cause confusion or errors.
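The suffix behavior can be illustrated with a small sketch. The five-character alphanumeric alphabet below is a simplification for illustration; Kubernetes uses its own name-generation rules (a restricted character set, conflict retries, etc.):

```python
# Sketch: a controller derives unique Pod names from its own name,
# a dash, and a short random suffix, e.g. javawebapprc-abc12.
import random
import string

def generate_pod_name(rc_name, suffix_len=5):
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choice(alphabet) for _ in range(suffix_len))
    return f"{rc_name}-{suffix}"

print(generate_pod_name("javawebapprc"))  # e.g. javawebapprc-7k2xq
```

This is why two Pods created by the same RC share a prefix but never a full name.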
6. Working with ReplicationControllers
1. Create and Inspect
kubectl apply -f rc.yaml
kubectl get rc -n test-ns
kubectl get all -n test-ns
kubectl get all shows all common resources in the cluster; adding -n test-ns restricts the output to the namespace test-ns.
The output shows two running Pods.
Their names start with javawebapprc because they were created by the ReplicationController named javawebapprc.
The ending part (-44v9d, -8b5fp) is a random string auto-generated by Kubernetes to make each Pod name unique.
So both Pods belong to the same RC but have different suffixes.
kubectl get pods -l app=javawebapp --show-labels -n test-ns
kubectl get pods lists all Pods; -l app=javawebapp filters Pods that have the label app=javawebapp; --show-labels also shows which labels are attached to each Pod; -n test-ns looks inside the namespace test-ns.
2. Watch Self-Healing
kubectl delete pod <pod-name>
a) Before deletion
You had 2 Pods running:
javawebapprc-44v9d (Running)
javawebapprc-8b5fp (Running)
Both maintained by the RC javawebapprc.
b) You deleted one Pod
kubectl delete pod javawebapprc-44v9d -n test-ns
That Pod was removed from the cluster.
But the RC still wants 2 Pods running (DESIRED = 2).
Now it sees only 1 Pod left (javawebapprc-8b5fp).
c) RC auto-created a new Pod
After a few seconds, the RC started a new Pod:
javawebapprc-dpkn2 (Running)
Notice the name:
It still starts with javawebapprc (because the RC created it).
It ends with a new random suffix (-dpkn2).
So again, you have 2 Pods total:
javawebapprc-8b5fp (old one, still running)
javawebapprc-dpkn2 (newly created replacement)
3. See Who Owns a Pod
kubectl describe pod <pod-name> -n test-ns | grep -i 'Controlled By'
kubectl describe pod shows detailed info about the Pod inside the test-ns namespace; grep -i 'Controlled By' filters only the line that tells you which higher-level object is controlling the Pod.
4. Scaling Pods (Best Practice)
a) Initial State
kubectl get all -n test-ns
RC (javawebapprc) had 2 replicas (DESIRED = 2, CURRENT = 2, READY = 2). Two Pods were running:
javawebapprc-b2lcp
javawebapprc-dpkn2
b) Scaling the RC
kubectl scale rc javawebapprc --replicas=3 -n test-ns
You told Kubernetes: "Now I want 3 Pods instead of 2."
The RC controller compared desired (3) vs current (2).
Since it was short of 1 Pod, RC created one new Pod automatically.
c) Final State
kubectl get all -n test-ns
DESIRED = 3, CURRENT = 3, READY = 3
Three pods are running:
javawebapprc-b2lcp
javawebapprc-dpkn2
javawebapprc-z28cg (newly created)
To scale down for maintenance:
kubectl scale rc javawebapprc --replicas=0 -n test-ns
You can scale up or down like this whenever the situation requires.
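Conceptually, kubectl scale only changes the desired replica count; the control loop then closes the gap. A toy model of that two-step behavior (not real kubectl internals; the state dictionary and pod names are illustrative assumptions):

```python
# Toy model: scaling just updates the desired count;
# the reconcile step then creates or deletes Pods to match it.

def scale(state, replicas):
    state["desired"] = replicas
    return state

def reconcile(state):
    diff = state["desired"] - len(state["pods"])
    if diff > 0:
        # Short of Pods: create the difference.
        state["pods"].extend(f"pod-{i}" for i in range(diff))
    elif diff < 0:
        # Too many Pods: trim back to the desired count.
        del state["pods"][state["desired"]:]
    return state

rc = {"desired": 2, "pods": ["javawebapprc-b2lcp", "javawebapprc-dpkn2"]}
reconcile(scale(rc, 3))
print(len(rc["pods"]))  # 3
reconcile(scale(rc, 0))
print(len(rc["pods"]))  # 0
```

Scaling to 0 is the toy-model equivalent of the maintenance command above: the RC object stays, but no Pods run.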
5. Deleting RC (What Happens to Pods?)
Delete RC and Pods
kubectl delete rc javawebapprc -n test-ns
You deleted the ReplicationController named javawebapprc in the namespace test-ns.
Kubernetes responds: replicationcontroller "javawebapprc" deleted
What Happened to the Pods?
By default, when you delete an RC, the Pods it manages are also deleted.
After deletion, you checked:
kubectl get all -n test-ns
and the output was: No resources found in test-ns namespace.
This means both the RC and all Pods it controlled were removed.
Important Note
- If you want to delete the RC but keep the Pods, use the --cascade=orphan flag:
kubectl delete rc javawebapprc -n test-ns --cascade=orphan
In that case, the RC is deleted but the Pods keep running, no longer managed by any controller.
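The difference between the two delete modes can be modeled with a small sketch. This is a toy model only; real Kubernetes garbage collection works through ownerReferences on each Pod, and the cluster dictionary here is an illustrative assumption:

```python
# Toy model of deleting an RC: a cascading delete removes the Pods too,
# while orphaning keeps them running but unowned.

def delete_rc(cluster, rc_name, orphan=False):
    rc = cluster["rcs"].pop(rc_name)
    for pod in rc["pods"]:
        if orphan:
            # Orphan mode: the Pod stays, but nothing owns it anymore.
            cluster["pods"][pod]["controlled_by"] = None
        else:
            # Default (cascading) mode: the Pod is deleted with the RC.
            del cluster["pods"][pod]
    return cluster

def make_cluster():
    return {
        "rcs": {"javawebapprc": {"pods": ["javawebapprc-a", "javawebapprc-b"]}},
        "pods": {
            "javawebapprc-a": {"controlled_by": "javawebapprc"},
            "javawebapprc-b": {"controlled_by": "javawebapprc"},
        },
    }

print(len(delete_rc(make_cluster(), "javawebapprc")["pods"]))  # 0
orphaned = delete_rc(make_cluster(), "javawebapprc", orphan=True)
print(len(orphaned["pods"]))  # 2
```

In the orphan case the Pods survive with no controller, which matches the "Controlled By" line disappearing from kubectl describe output.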
7. Summary
ReplicationController (RC) ensures your application is always running with the desired number of Pods.
It continuously watches Pods and takes action if any Pod crashes, is deleted, or extra Pods exist.
RC provides self-healing and scaling for workloads, keeping apps reliable.
Although RC is older and mostly replaced by ReplicaSet, it is still important to understand as a fundamental Kubernetes concept.