Exploring Kubernetes: Learn About ReplicationController, ReplicaSet, and Deployment


Welcome to our deep dive into the world of Kubernetes' replication mechanisms: ReplicationController, ReplicaSet, and Deployment. If you're managing applications in a Kubernetes environment, understanding these concepts is crucial for ensuring your services are scalable, resilient, and highly available. Each of these Kubernetes objects plays a unique role in managing pod replicas, but they also build upon each other, offering increasingly sophisticated features for application management. In this blog, we'll explore how each of them works, their differences, and when to use one over the others, helping you make informed decisions for your Kubernetes deployments.
Replication Controller
We will start with the ReplicationController, a legacy API for managing workloads that can scale horizontally. A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, it makes sure that a pod or a homogeneous set of pods is always up and available.
The ReplicationController ensures that the right number of pods are always running. If there are too many, it removes the extras; if there are too few, it creates more. Unlike manually created pods, those managed by a ReplicationController are automatically replaced if they fail, are deleted, or are shut down.
For example, if a node undergoes maintenance (like a kernel upgrade), the ReplicationController will recreate the pods on another node. Even if your application only needs a single pod, using a ReplicationController ensures it stays available.
Think of it like a process supervisor, but instead of managing individual processes on one machine, it manages multiple pods across different nodes.
Example of Replication Controller
This YAML file is setting up a ReplicationController in Kubernetes, which is like a manager for your pods. It's named 'nginx' and its job is to keep three copies of an nginx web server running at all times. Each of these copies, or pods, runs a container from the nginx Docker image and listens on port 80 for web traffic. The ReplicationController knows which pods to manage because they all have a label 'app: nginx'. If one of these pods goes down, the ReplicationController will spin up a new one to keep the count at three, ensuring your web service stays up and running smoothly.
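The original manifest is not reproduced here, but based on the description above, a minimal ReplicationController manifest saved as nginx.yaml (the file applied in the next step) would look roughly like this sketch; the container name and image tag are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3                  # keep three copies of the pod running
  selector:
    app: nginx                 # manage pods carrying this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx           # image tag not stated in the post
        ports:
        - containerPort: 80    # nginx listens for web traffic here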
We apply the ReplicationController configuration in the YAML file with the command below, which creates the ReplicationController.
kubectl apply -f nginx.yaml
To verify that the replication controller was created and successfully replicated the nginx pod, we can check the number of running pods.
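One way to do that, assuming the labels from the sketch above, is with kubectl:

kubectl get rc nginx
kubectl get pods -l app=nginx

The ReplicationController should report three desired and three ready replicas, with three nginx pods listed.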
ReplicaSet
As Kubernetes evolved, it introduced ReplicaSet as an improved version of the ReplicationController. While both serve the same core purpose of ensuring that a specified number of pod replicas are running at all times, ReplicaSet offers more flexibility and better integration with modern Kubernetes features.
Unlike ReplicationController, which only supports exact label matches, ReplicaSet allows set-based selectors, making it easier to group and manage pods dynamically. More importantly, ReplicaSet is the foundation for Deployments, which are now the preferred way to manage workloads in Kubernetes.
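As a quick illustration of set-based selectors (this fragment is hypothetical and not from the original post), a ReplicaSet selector can match pods whose label value falls within a set, which a ReplicationController's equality-only selector cannot express:

selector:
  matchExpressions:
  - key: tier
    operator: In          # match any pod whose 'tier' label is one of the listed values
    values:
    - frontend
    - cache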
In this section, we’ll explore how ReplicaSet works, its key advantages over ReplicationController, and how you can use it effectively in your Kubernetes environment.
Example of ReplicaSet
This YAML file is setting up a ReplicaSet in Kubernetes for the frontend part of a guestbook app. It's called 'frontend' and its job is to keep three instances of this frontend running at all times. Each instance runs in a pod with a container using the 'gb-frontend:v5' image, which is probably a custom image for this app's frontend. The ReplicaSet knows which pods to manage because they're all tagged with the label 'tier: frontend'. If one of these pods goes down, the ReplicaSet will start a new one to keep the count at three, ensuring your app's frontend stays available and can handle traffic smoothly.
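Again, the manifest itself is not shown in the text; based on the description, a sketch of it would look like the following, with the container name being an assumption beyond what the post states:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    tier: frontend
spec:
  replicas: 3                  # keep three frontend instances running
  selector:
    matchLabels:
      tier: frontend           # manage pods carrying this label
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: frontend         # container name assumed
        image: gb-frontend:v5  # custom guestbook frontend image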
We proceed to apply the ReplicaSet configuration in the YAML file using the command below. This will create the ReplicaSet.
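The command itself does not appear in the text; assuming the manifest above is saved as frontend.yaml, it would be:

kubectl apply -f frontend.yaml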
To verify that the ReplicaSet was created and successfully replicated the frontend pod, we can check the number of running pods.
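Assuming the labels used above, this can be done with:

kubectl get rs frontend
kubectl get pods -l tier=frontend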
We can go ahead and modify the ReplicaSet configuration by increasing the replicas from three to four. This change will ensure that we have four pods running at all times.
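Concretely, that means editing the spec in the manifest (frontend.yaml, filename assumed as before) so that it reads:

spec:
  replicas: 4    # increased from 3

and then re-applying it with kubectl apply -f frontend.yaml.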
We can also update the replica count from the command line; we don't need to modify the ReplicaSet YAML every time we want to scale the number of pods up or down. The command below increases the number of pods from four to six.
kubectl scale --replicas=6 replicaset/frontend
Deployment
After exploring ReplicationController and ReplicaSet, it's time to look at a more powerful and flexible way to manage workloads in Kubernetes. A Deployment manages a set of Pods for running stateless application workloads and provides declarative updates for both Pods and ReplicaSets. You just describe the desired state, and the Deployment Controller works behind the scenes to adjust the actual state to match it, rolling out changes gradually and reliably.
Deployments can create new ReplicaSets, scale them, update them, or even replace old ones by adopting their resources. This makes it the go-to tool for handling updates, rollbacks, and scaling in a clean, controlled way.
Example of Deployment
This YAML configuration defines a Deployment named nginx for managing a backend tier of an application. It ensures that three instances of an nginx pod are running at all times, each using the nginx:1.23.0 Docker image and exposing port 80. The Deployment uses label selectors to manage these pods, ensuring they are labeled with app: v1. This setup is particularly useful for applications where you want to manage updates and scaling in a controlled manner. Deployments provide features like rolling updates, where you can update the application to a new version with minimal downtime, and rollbacks if something goes wrong. If a pod fails or is terminated, the Deployment will automatically manage the creation of a new pod to maintain the desired number of replicas, ensuring high availability and load balancing for the backend service.
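The full manifest is not reproduced in the post; based on the description, a sketch of it, assuming it is saved as nginx-deployment.yaml, would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: v1
spec:
  replicas: 3                   # keep three nginx pods running
  selector:
    matchLabels:
      app: v1                   # manage pods carrying this label
  template:
    metadata:
      labels:
        app: v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.0     # initial image version
        ports:
        - containerPort: 80

Applying it with kubectl apply -f nginx-deployment.yaml creates the Deployment, which in turn creates a ReplicaSet and the three pods.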
We proceed to check the running pods to confirm that our deployment was successfully created and the pods are operational. This deployment creates three pods because the number of replicas is set to three.
kubectl get pods
We will update the image in the deployment from nginx:1.23.0 to nginx:1.23.4 using the command line. Sometimes, it's faster and more convenient to use the CLI for quick changes, especially during development or testing.
kubectl set image deployment/nginx nginx=nginx:1.23.4
This change can be verified by getting detailed information about a specific running pod. This information includes its configuration, current status, and recent events, which are helpful for debugging and understanding how the pod is running.
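For example, you can list the pods and then describe one of them; <pod-name> below is a placeholder for one of the generated pod names:

kubectl get pods
kubectl describe pod <pod-name>

The Containers section of the output should now show the image nginx:1.23.4.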
In Kubernetes, rollouts are the process of gradually updating your application to a new version without downtime. When you make changes to a Deployment, such as updating the container image or scaling replicas, Kubernetes doesn't just replace everything at once. Instead, it performs a rolling update, where old pods are replaced with new ones in a controlled, step-by-step manner. This ensures that the application remains available throughout the update. To keep track of these changes, you can use annotations like kubernetes.io/change-cause to document the reason for a rollout. This becomes especially useful when reviewing rollout history using kubectl rollout history, as it gives context to each change. In this way, Kubernetes not only automates safe updates but also makes them traceable and easier to manage.
kubectl annotate deployment nginx kubernetes.io/change-cause="Pick up patch version"
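With the change-cause annotation set, the rollout history lists it alongside each revision:

kubectl rollout history deployment/nginx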
We currently have the nginx image version set to nginx:1.23.4 after our update. We can roll back the nginx deployment to a previous version, specifically revision 1. This is possible because Kubernetes keeps a history of Deployment revisions. A revision could involve a change in the replica count or even a different image, but in this rollback, it is specifically the nginx version that changes.
kubectl rollout undo deployment/nginx --to-revision=1
We can confirm that the rollback was successful by checking the details of a specific running pod, which should have the nginx image version set to 1.23.0.
In conclusion, understanding ReplicationController, ReplicaSet, and Deployment is key to managing applications effectively. ReplicationController ensures basic pod availability, ReplicaSet adds flexibility with smarter pod selection, and Deployment brings it all together with powerful features for scaling, updates, and rollbacks. Together, they help you build scalable, resilient, and reliable workloads in Kubernetes.
Written by Obinna Iheanacho
DevOps Engineer with a proven track record of streamlining software development and delivery processes. Skilled in automation, configuration management, and continuous integration and delivery (CI/CD), with expertise in cloud infrastructure and containerization technologies. Possess strong communication and collaboration skills, able to work effectively across development, operations, and business teams to achieve common goals. Dedicated to staying current with the latest technologies and tools in the DevOps field to drive continuous improvement and innovation.