"Mastering Kubernetes Deployments: A Step-by-Step Guide for Beginners To Advance level"

Shaik Mustafa
6 min read

Definition:

A Deployment in Kubernetes is a resource object used to manage the deployment, scaling, and updates of containerized applications. It simplifies application lifecycle management by defining a desired state, and Kubernetes ensures the system reaches and maintains that state.

Why We Need Deployment:

In Kubernetes, Pods are temporary and can be replaced for various reasons, such as scaling, node failures, or even pod crashes. This means that managing Pods manually can be cumbersome, especially when you need to ensure the application is always available.

This is where Deployments come into play. Think of a Deployment as a "manager" that oversees the lifecycle of your application’s Pods. It makes sure that:

  1. The right number of Pods is running: For instance, if your app needs 3 Pods to handle the load, a Deployment ensures there are always 3 Pods running. If one crashes, it quickly replaces it.

  2. Upgrades happen smoothly: When you release a new version of your app, a Deployment allows you to update Pods without any downtime. It does this by updating Pods one by one, ensuring there’s always an instance of the app available.

  3. Failures are handled automatically: If a Pod fails, the Deployment automatically creates a new one to replace it, ensuring the application stays up and running.

Why Deployments Are Powerful

Using deployments in Kubernetes ensures your application is:

  • Always available and resilient.

  • Easy to update and maintain.

  • Scalable based on user demand.

Some points to remember:

  • The best part of a Deployment is that updates can be rolled out without downtime.

  • It has a pause feature, which allows you to temporarily suspend updates to the application.

  • Scaling can be done manually or automatically based on metrics such as CPU utilization or requests per second (see the autoscaling sketch after this list).

  • A Deployment creates a ReplicaSet, and the ReplicaSet creates the Pods.

  • If you delete a Deployment, it deletes its ReplicaSet, and the ReplicaSet in turn deletes its Pods.
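
As a quick illustration of the last two points, here is a minimal sketch, assuming a Deployment named my-app whose Pods carry the label app: my-app (matching the manifest shown later). Note that CPU-based autoscaling also requires a metrics source such as metrics-server to be installed in the cluster:

# Autoscale between 3 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment my-app --min=3 --max=10 --cpu-percent=70

# Inspect the chain: the Deployment owns a ReplicaSet, and the ReplicaSet owns the Pods
kubectl get deployment my-app
kubectl get replicaset -l app=my-app
kubectl get pods -l app=my-app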

Command to create a Deployment:

kubectl create deployment deployment-name --image=image-name --replicas=4
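
For example, assuming the public nginx image from Docker Hub and a hypothetical Deployment name web, the following creates a Deployment with 4 replicas and then lists it:

kubectl create deployment web --image=nginx --replicas=4
kubectl get deployments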

Manifest file to create Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: my-app            # manage Pods carrying this label
  template:                  # Pod template used to create the Pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: cont-1
        image: shaikmustafa/dm
        ports:
        - containerPort: 80

Apply the YAML File:

kubectl apply -f deployment.yaml

  • To create a ReplicaSet from a manifest:

      kubectl create -f replicaset-nginx.yaml

  • To list Deployments:

      kubectl get deployment

  • To see Deployments with additional details:

      kubectl get deployment -o wide

  • To view a Deployment in YAML format:

      kubectl get deployment -o yaml

  • To describe a Deployment in full:

      kubectl describe deploy

  • To check the logs of a Pod:

      kubectl logs pod_name

  • To delete a Deployment:

      kubectl delete deployment deploy_name

Updating Image on Deployment

Background:
You’re a DevOps engineer responsible for managing the deployment of an e-commerce application on Kubernetes. The application is built using a containerized microservices architecture, and the backend service (handling user transactions) is currently running a specific version of the container image.

Current Situation:

  • Your application is using Version 1 of the backend service container.

  • Your developers have finished working on Version 2 of the backend service container, which includes new features and bug fixes.

  • Your users are actively interacting with the e-commerce site, so you want to update the image without causing downtime or disruptions.

Step-by-Step Process:

  1. Preparation (Get the New Image Ready): The developers finish building the new backend image for version 2, which includes improvements like faster checkout processing and updated security patches. This new image is stored in your container registry (e.g., Docker Hub).

  2. Update the Deployment: As the DevOps engineer, you use the kubectl command to update the deployment’s image. The new image will be rolled out gradually to ensure the application continues to run smoothly without any downtime.

    Command to update the image:

     kubectl set image deployment/deployment-name cont-1=new-image-name
    
  3. Rolling Update (Zero Downtime): Kubernetes automatically triggers a rolling update. It starts by creating new pods with the new image (v2). These new pods are added to the deployment, while the old pods running v1 continue to serve traffic (a sketch of the strategy settings that control this behaviour appears after this list).

    • Step 1: Kubernetes creates new pods with the updated image.

    • Step 2: Traffic is gradually routed to the new pods as they become ready.

    • Step 3: The old pods are slowly terminated as the new ones take over.

    • Step 4: Once all new pods are running and healthy, the old pods are completely removed.

  4. Monitor the Rollout: You monitor the rollout to ensure that everything goes as planned and that there are no issues during the deployment.

    Check rollout status:

     kubectl rollout status deployment/deployment-name
    
  5. Verify Post-Update: After the update is complete, you verify that the new version (v2) is running and serving requests.

    Check running pods:

     kubectl get pods
    

    You can also perform application testing to ensure that all features are working as expected in the updated version.

  6. Rollback (If Needed): In case there are any issues with the new image (v2), Kubernetes allows you to roll back to the previous version (v1), ensuring that the system can quickly recover.

    Command to rollback:

     kubectl rollout undo deployment/deployment-name
    
  7. To view the rollout history:

     kubectl rollout history deployment/deployment-name
    
  8. To roll back to a specific revision:

     kubectl rollout undo deployment/deployment-name --to-revision=number
    
  9. To pause a rollout of a Deployment:

     kubectl rollout pause deployment/deployment-name
    
  10. To resume a paused rollout of a Deployment:

     kubectl rollout resume deployment/deployment-name
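
The zero-downtime behaviour described in step 3 is controlled by the Deployment's update strategy. Here is a minimal sketch of the relevant spec fields, using the my-app manifest from earlier as a base; the maxSurge and maxUnavailable values below are example choices, not required settings:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # default strategy type for Deployments
    rollingUpdate:
      maxSurge: 1              # allow at most 1 extra Pod above the desired count during an update
      maxUnavailable: 0        # never remove an old Pod before its replacement is Ready
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: cont-1
        image: shaikmustafa/dm
        ports:
        - containerPort: 80

With maxUnavailable set to 0, Kubernetes always starts a new Pod and waits for it to become Ready before terminating an old one, which is exactly what keeps the application available throughout the update.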

Manual Scaling:

Your website typically gets a steady amount of traffic, but during certain events, such as New Year, Sankranti, or a Diwali bumper bonanza, you know the number of users will spike drastically.

Let’s say you're planning a Diwali sale offering 50% off on all products, and you need to ensure that your website can handle the sudden increase in traffic without crashing or slowing down.

Step 1: Assessing the Traffic (Identifying the Need for Scaling)

You expect the traffic to increase by 3x during the sale, so you realize that your current 3 Pods (replicas) running the application might not be enough to handle the load. Your application might become slow or unresponsive, and customers might abandon their shopping carts.

Step 2: Manual Scaling

To avoid issues, you decide to manually scale up the number of Pods before the sale begins.

You use the kubectl command to scale the deployment:

kubectl scale deployment deployment-name --replicas=9

This command increases the number of Pods from 3 to 9, so your application can handle the expected traffic.

Step 3: Scaling During the Sale

During the sale, everything is running smoothly with the 9 Pods. However, as the sale starts to wind down and traffic decreases, you notice that you’re now using more resources than necessary, and those 9 Pods no longer need to be running.

Step 4: Scaling Down After the Event

After the sale ends, you manually scale down the Pods to save resources and cost:

kubectl scale deployment deployment-name --replicas=3

This reduces the number of Pods from 9 back to 3, returning the application to its normal state.
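
The same change can also be made declaratively instead of with kubectl scale: edit the replicas field in the manifest and re-apply it. A minimal sketch, assuming the deployment.yaml file for my-app shown earlier:

# in deployment.yaml, change "replicas: 3" to "replicas: 9" (or back to 3 after the sale)
kubectl apply -f deployment.yaml

# confirm the Deployment reports the new replica count
kubectl get deployment my-app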

Some other commands on Deployments

  1. Export the YAML of an existing Deployment:

     kubectl get deployment my-app -o yaml > my-app-deployment.yaml
    
  2. Check logs of Pods managed by the Deployment:

     kubectl logs deployment/my-app
    
  3. Get logs of a specific Pod in the Deployment:

     kubectl logs pod-name
    
  4. Run a shell inside a Pod of the Deployment:

     kubectl exec -it pod-name -- bash
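
    Note that bash is not present in every container image; minimal images (for example, Alpine-based ones) often ship only sh. If the command above fails, a safe fallback is:

     kubectl exec -it pod-name -- sh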
    

If you enjoy stories that help you learn, live, and work better, consider subscribing. If this article provided you with value, please support my work — only if you can afford it. You can also connect with me on LinkedIn. Thank you!
