Deploying Your Application on AKS with a Rolling Update Strategy

Tauqeer Ahmad
4 min read

In modern cloud-native environments, ensuring continuous delivery without disrupting user experience is a top priority. When running workloads on Azure Kubernetes Service (AKS), you gain built-in support for rolling update deployments, which allow you to release new versions of your application gradually—without bringing down the entire service.

In this guide, we’ll walk through how to configure and deploy your application on AKS using the rolling update strategy to achieve smooth, production-grade releases.

🔧 What is a Rolling Update?

A rolling update gradually replaces instances of the old version of your application with the new one, minimizing service disruption.

  • Ensures zero downtime deployment.

  • Maintains high availability by keeping some pods running at all times.

  • Provides a rollback path if the new version fails.

☁️ Prerequisites for Deployment on AKS

Before you get started, ensure you have the following set up (a quick way to verify the tooling follows the list):

  • An Azure subscription

  • AKS cluster created and configured

  • kubectl configured to connect to your AKS cluster

  • Docker image of your application available in a container registry (Azure Container Registry, Docker Hub or GitHub Container Registry)

  • Kubernetes Deployment YAML for your application
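
As a quick check of the tooling (assuming the Azure CLI and kubectl are already installed), you can run:

# Confirm you are logged in to the correct Azure subscription
az account show

# Confirm kubectl is installed and working
kubectl version --client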

Let’s get hands-on and try deploying the application.

Let’s check the file structure first.

k8s/
├── configmap.yaml
├── deployment.yaml
├── hpa.yaml
├── ingress.yaml
├── namespace.yaml
├── secret.yaml
└── service.yaml

Let’s dive in and understand each of these files:

deployment.yaml - Defines how to deploy and manage a set of replicated Pods in Kubernetes, specifying container images, replicas, and update strategies.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-verifier
  namespace: email-verifier
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: email-verifier
  template:
    metadata:
      labels:
        app: email-verifier
    spec:
      containers:
      - name: email-verifier
        image: tauqeerops/email-verifier:latest
        ports:
        - containerPort: 3000

Key configurations:

  • maxSurge: 1 allows one extra pod to be created temporarily during the update.

  • maxUnavailable: 0 means no existing pod is taken down until its replacement is ready, so all three replicas stay available throughout the rollout.

With replicas: 3, Kubernetes brings up one new pod (four in total for a moment), waits for it to become ready, terminates one old pod, and repeats until every pod runs the new image.
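
Because maxUnavailable is 0, the rollout only makes progress when new pods report ready, so readiness probes matter here. A minimal sketch of the probes and resource settings you might add under the container spec above (the /health path and the numbers are assumptions; adjust them to your application):

        # /health is an assumed endpoint; point this at whatever your app exposes
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 20
        # Requests/limits below are illustrative starting points
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi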

service.yaml - Exposes a set of Pods as a network service, enabling stable access via a DNS name and load balancing across Pods.

apiVersion: v1
kind: Service
metadata:
  name: email-verifier-svc
  namespace: email-verifier
spec:
  type: ClusterIP
  selector:
    app: email-verifier
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
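
Before adding the ingress, you can sanity-check the Service locally with a port-forward (3000 is used here just to match the container port):

# Forward a local port to the Service, then open http://localhost:3000
kubectl port-forward svc/email-verifier-svc 3000:3000 -n email-verifier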

ingress.yaml - Configures external access to services in a Kubernetes cluster, typically via HTTP/HTTPS, using rules to route traffic to the appropriate services.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: email-verifier-ingress
  namespace: email-verifier
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: email-verifier-svc
                port:
                  number: 3000
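
Once applied, you can confirm the ingress was admitted and has been assigned an address (this assumes the NGINX ingress controller is already installed in the cluster):

# The ADDRESS column should show the controller's external IP once it is assigned
kubectl get ingress email-verifier-ingress -n email-verifier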

configmap.yaml, hpa.yaml, namespace.yaml, and secret.yaml configure environment-specific settings, autoscaling behavior, logical separation of resources, and sensitive data, respectively. They are not always mandatory for a basic Kubernetes setup, but they play a crucial role in production-grade, secure, and scalable deployments.
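
For reference, a minimal sketch of namespace.yaml and hpa.yaml might look like the following (the HPA name, replica bounds, and CPU target are illustrative; AKS ships with the Metrics Server the HPA needs, and configmap.yaml / secret.yaml follow the standard ConfigMap and Secret patterns):

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: email-verifier
---
# hpa.yaml - illustrative values, tune them to your workload
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: email-verifier-hpa
  namespace: email-verifier
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: email-verifier
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70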

Deploying the resources to AKS:

First, connect to the cluster using the following command:

az aks get-credentials --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_AKS_CLUSTER_NAME
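
Once the credentials are merged into your kubeconfig, a quick way to confirm you are connected:

# Verify the connection by listing the cluster nodes
kubectl get nodes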

🚀 Deploying to AKS

Once your deployment YAML is ready, deploy it using:

kubectl apply -f k8s/deployment.yaml && kubectl apply -f k8s/service.yaml
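
Note that the Deployment and Service both target the email-verifier namespace, so the namespace has to exist before they are applied. One convenient way to handle this (and to apply the ConfigMap, Secret, HPA, and ingress in the same step) is:

# Create the namespace first, then apply everything else in the folder
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/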

Kubernetes will start replacing the old pods with new ones based on the strategy parameters defined.

You can monitor the rollout with:

# Watch the rollout status
kubectl rollout status deployment/email-verifier -n email-verifier

# Check the pods transition
kubectl get pods -n email-verifier -w

🔄 Updating the Application Version

To update your app (e.g., to a new Docker image tag), modify the image version in your deployment YAML or use:

kubectl set image deployment/email-verifier email-verifier=tauqeerops/email-verifier:v2 -n email-verifier

Kubernetes will handle the rolling update automatically. Use this command when you want to change the image directly instead of editing and re-applying deployment.yaml.
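
If you update images this way, it is also worth recording why a revision was created and reviewing the rollout history (the change-cause text below is just an example):

# Record the reason for this revision (shows up in rollout history)
kubectl annotate deployment/email-verifier kubernetes.io/change-cause="update image to v2" -n email-verifier

# List recorded revisions of the deployment
kubectl rollout history deployment/email-verifier -n email-verifier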

📉 Rolling Back in Case of Failure

If your update introduces issues, you can quickly roll back:

# Rollback to immediate previous version
kubectl rollout undo deployment/email-verifier -n email-verifier

# Rollback to specific revision
kubectl rollout undo deployment/email-verifier --to-revision=2 -n email-verifier

Kubernetes will revert the Deployment to the chosen revision using the same rolling update strategy, so the rollback itself is also free of downtime.
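
Optionally, confirm the rollback completed and check which image is now running (the jsonpath query just prints the container image):

# Wait for the rollback to finish
kubectl rollout status deployment/email-verifier -n email-verifier

# Print the image the deployment is currently running
kubectl get deployment email-verifier -n email-verifier -o jsonpath='{.spec.template.spec.containers[0].image}'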

📊 Observability During Rolling Updates

Monitoring is key during updates. Use tools like:

  • Azure Monitor for containers

  • Prometheus and Grafana

  • kubectl describe pods / get events for real-time debugging

These tools help you ensure the health of your application throughout the update process.
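
For example, the kubectl checks mentioned above typically look like this during a rollout:

# Recent events in the namespace, most recent last
kubectl get events -n email-verifier --sort-by=.lastTimestamp

# Inspect a pod that is stuck or restarting
kubectl describe pod <pod-name> -n email-verifier

# Tail logs from the deployment's pods
kubectl logs deploy/email-verifier -n email-verifier --tail=50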

You can learn more about deploying workloads on Azure Kubernetes Service in the official Azure documentation.

If you are using NGINX, make sure the NGINX Ingress Controller is installed in the cluster before applying the ingress manifest.

Accessing the Application

Use the external IP assigned to the ingress to access the application, for example:

http://48.217.216.247
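
If you are unsure what that IP is, it is the public IP of the ingress controller's LoadBalancer service (the namespace and service name below assume a standard NGINX Ingress Controller installation; adjust if yours differs):

# Find the ingress controller's external IP
kubectl get svc ingress-nginx-controller -n ingress-nginx

# Hit the app through the ingress (-L follows the HTTPS redirect, -k skips cert verification if no TLS certificate is configured)
curl -Lk http://<EXTERNAL-IP>/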

(Diagram: flow of the resources being deployed to AKS.)

✅ Best Practices

  • Always test updates in a staging environment.

  • Use readiness probes to control pod availability during updates.

  • Leverage canary deployments for safer releases.

  • Monitor with dashboards and alerts during rollout.

📌 Conclusion

Rolling updates are essential for modern application delivery. By using AKS and Kubernetes best practices, you can achieve zero downtime deployments, rapid iteration, and safe rollback mechanisms—all while keeping your users happy.
