Module 5: Deployments & Services

🔹 Introduction
We explored Pods and ReplicaSets. While these are foundational concepts, real-world Kubernetes applications rarely use Pods or ReplicaSets directly. Instead, we rely on Deployments to manage them, and Services to make our applications accessible.
This module will cover:
What is a Deployment?
Benefits of using Deployments over Pods/ReplicaSets
Rolling updates & rollback with Deployments
What is a Service in Kubernetes?
Different types of Services (ClusterIP, NodePort, LoadBalancer)
Hands-on example: Deploying an Nginx app and exposing it with a Service
🔹 Deployments in Kubernetes
A Deployment is a higher-level abstraction in Kubernetes that:
Manages ReplicaSets automatically.
Ensures the desired state of Pods (self-healing).
Provides rolling updates and rollbacks.
Makes scaling applications easier.
Why not just use Pods or ReplicaSets?
Pods are ephemeral (they die if a node fails).
ReplicaSets provide scaling but lack deployment strategies.
Deployments combine both: scaling, self-healing, rolling updates, and rollback.
Example: Nginx Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Explanation:
Runs 3 replicas of the Nginx Pod.
Ensures that if one Pod fails, a replacement is created automatically.
Can be updated easily with a new image version.
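To try it out, you could save the manifest to a file (the filename nginx-deployment.yaml here is just an assumption) and apply it:
# Create the Deployment from the manifest
kubectl apply -f nginx-deployment.yaml

# Verify the Deployment and its three Pods
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx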
Scaling Deployments
kubectl scale deployment nginx-deployment --replicas=5
Now, Kubernetes will automatically adjust the number of Pods to 5.
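A quick way to confirm the change:
# READY should report 5/5 once the new Pods are running
kubectl get deployment nginx-deployment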
Rolling Updates
If you update the Deployment to use nginx:1.25, Kubernetes will:
Gradually replace old Pods with new ones.
Ensure zero downtime.
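One way to trigger such an update is with kubectl set image; a minimal sketch, where the container name nginx matches the manifest above:
# Change the container image; old Pods are replaced gradually
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Watch the rollout until it completes
kubectl rollout status deployment/nginx-deployment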
Rollback if needed:
kubectl rollout undo deployment nginx-deployment
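To see which revisions are available to roll back to:
# List the Deployment's recorded revisions
kubectl rollout history deployment nginx-deployment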
🔹 Services in Kubernetes
Deployments create Pods, but Pods get dynamic IPs that change whenever a Pod restarts. That's where Services come in!
A Service is a stable network endpoint that routes traffic to a set of Pods.
Types of Services:
ClusterIP (default): Exposes the service inside the cluster only.
NodePort: Exposes the service on a static port on each Node.
LoadBalancer: Creates a cloud load balancer (e.g., AWS ELB, GCP LB).
Example: Exposing Nginx with a Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
Explanation:
Targets Pods with the label app=nginx.
Exposes port 80 inside the cluster.
NodePort assigns a static port (e.g., 30007) on each Node.
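To apply it and find the assigned NodePort (the filename nginx-service.yaml is an assumption):
# Create the Service
kubectl apply -f nginx-service.yaml

# The PORT(S) column shows something like 80:30007/TCP; 30007 is the NodePort
kubectl get service nginx-service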
Accessing the App
Inside the cluster:
curl http://nginx-service
Outside the cluster (NodePort):
http://<NodeIP>:<NodePort>
On the cloud (LoadBalancer): a public LoadBalancer IP is provided.
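If you are running on a cloud provider, switching the same Service to type LoadBalancer is enough; a minimal sketch:
# Change the Service type (or set type: LoadBalancer in the manifest and re-apply)
kubectl patch service nginx-service -p '{"spec": {"type": "LoadBalancer"}}'

# EXTERNAL-IP shows the public address once the load balancer is provisioned
kubectl get service nginx-service --watch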
🔹 Summary
Deployments = Best way to run production apps (scaling, rolling updates, rollback).
Services = Stable network endpoint for Pods.
Together, they form the backbone of Kubernetes workloads.