Production Level Blue-Green Deployment CICD Pipeline

Amit singh deora

Basic understanding of Project Blue-Green Deployment

We will deploy an application using the blue-green deployment strategy. That means there will be two environments: a blue environment and a green environment.

Consider a scenario where our application is currently deployed to the blue environment and we want to upgrade it with some new features.


Once we have written the source code for the new features of the application, we can deploy it to the next environment, green, and switch the traffic from the blue environment to the green environment.

Main benefit: the blue-green deployment strategy gives zero downtime.

We will bring up the EKS cluster where we are going to deploy the application, and see how to set up all the required tools and the different servers needed, such as the SonarQube server, the Nexus server, the Jenkins server, and more.

We will use the Build with Parameters option.

In the parameters section we can decide where we want to deploy: the blue environment or the green one. If you select green, don't forget to check the Switch traffic option.

Best part about using the blue-green deployment strategy

It supports zero downtime: if we upgrade our application from version one to version two, there is no downtime during the application upgrade.



A simple example to understand our project flow

To understand blue-green deployment, let's take a simple example application.

Suppose we have an application with a MySQL database, and the application source code (frontend and backend) is written in Java and HTML.

When we deploy the application inside Kubernetes, pods are created along with Services. See the simple example below:

  1. MySQL Pod (Database)

  • The MySQL Pod is deployed inside the Kubernetes cluster.

  • It is exposed using a ClusterIP Service (default service type in Kubernetes).

  • Purpose:

    • ClusterIP Service allows internal communication between different pods inside the Kubernetes cluster.

    • The database is not exposed to the external world, ensuring security.

  2. Main Application Pod (Frontend + Backend)

  • The Main Application Pod contains Java code for both frontend and backend.

  • It is exposed using a LoadBalancer Service.

  • Purpose:

    • LoadBalancer Service allows external access.

    • Users can send requests to the LoadBalancer URL to access the application.

  3. Communication Between Pods

  • If the Main Application Pod needs to communicate with the MySQL Pod, it cannot directly access it.

  • Instead, it sends requests to the ClusterIP Service of the database.

  • The ClusterIP Service then routes the request to the MySQL Pod.

Summary of Setup

MySQL Pod → Exposed via ClusterIP Service (Internal Communication)
Main App Pod → Exposed via LoadBalancer Service (External Access)
App ↔ Database Communication → Done via ClusterIP Service between pods.
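The setup above can be sketched as two Kubernetes Services. This is a minimal illustration, not the project's exact manifests; the names (mysql, bankapp) and ports are placeholders:

```yaml
# ClusterIP Service: internal-only access to the MySQL pod
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP        # default type; reachable only inside the cluster
  selector:
    app: mysql           # routes to pods labeled app=mysql
  ports:
    - port: 3306
---
# LoadBalancer Service: external access to the application pod
apiVersion: v1
kind: Service
metadata:
  name: bankapp-service
spec:
  type: LoadBalancer     # cloud provider provisions an external load balancer
  selector:
    app: bankapp         # routes to pods labeled app=bankapp
  ports:
    - port: 80
      targetPort: 8080
```

The application connects to the database using the Service DNS name (e.g. mysql.<namespace>.svc.cluster.local), never a pod IP.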

What is Blue-Green Deployment with our simple example below?

It's a deployment strategy that helps in upgrading applications without downtime by having two identical environments:

  • Blue Environment (Current Version - v1)

  • Green Environment (New Version - v2)

How It Works?

  1. Deploy the Current Version (Blue)

    • Deploy MySQL Pod and expose it using a ClusterIP Service.

    • Deploy App Pod (v1) and expose it using a LoadBalancer Service.

    • The LoadBalancer routes traffic to Blue (v1).

  2. Deploy the New Version (Green): Developers create a new version (v2) of the app with updated features. A new App Pod (v2) is deployed separately. The Green (v2) Pod is ready but not receiving traffic yet.

  3. Switch Traffic Without Downtime: Update the LoadBalancer Service to point to the Green (v2) environment instead of Blue. Now, users are accessing v2 without downtime.

  4. Remove the Old Version (Optional): Once v2 is stable, the Blue (v1) Pod can be deleted.
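Conceptually, step 3 is a one-line selector change on the Service. A hedged sketch (the service and label names are illustrative, and the command needs a live cluster):

```shell
# Repoint the LoadBalancer Service from the v1 (blue) pods to the v2 (green) pods
kubectl patch service bankapp-service \
  -p '{"spec":{"selector":{"app":"bankapp","version":"green"}}}'
# Rolling back is the same command with "version":"blue"
```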

Key Benefits of Blue-Green Deployment

Zero Downtime – Users don’t experience disruptions.
Instant Rollback – If v2 has issues, switch back to v1.
Safe Testing – Green (v2) can be tested before making it live.

Practical: Setting up the EKS Cluster

  • Infrastructure Setup: Determine the infrastructure where you will deploy the application.

  • Virtual Machine (EC2) Creation: Set up a virtual machine to run the Terraform commands for creating the EKS cluster. We will install Terraform and create the EKS cluster with the help of Terraform commands.

  • Create Virtual Machine “server” (we will use this machine to create the EKS Cluster):

    • Name the VM: "server".

    • Select Ubuntu version: 24.04.

    • Choose instance size: t2.medium.

    • Allocate 20 GB of storage.

  • You can open these ports in the security group; of course, this is not recommended for a production environment.

    Launch the instance

  • Setting up Terraform inside this VM - “Server”

  • Connect to this VM

  •                   sudo apt-get update
    
  • We are going to use the AWS CLI to connect this VM to our AWS account

  •                     sudo apt update
                        sudo apt install unzip curl -y
                        curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
                        unzip awscliv2.zip
                        sudo ./aws/install
                        aws --version
    

      aws configure
      aws s3 ls # To check
    

    We are going to install Terraform on the VM “Server”:

  •           sudo snap install terraform --classic
    

    Terraform is now installed successfully.

  • Clone the repo to our “Server” EC2 now

  • $git clone https://github.com/amitsinghs98/Blue-Green-Deployment.git

          cd Blue-Green-Deployment/Cluster
          terraform init
          # All the Terraform files are inside the Cluster directory
    
  • $terraform plan : shows which resources will be created on AWS.

  • $terraform apply : applies the changes and creates the planned infrastructure on AWS.

  • Before running apply, make sure you have changed the variables.tf file in the Cluster directory. Change the default value to the key pair of the current EC2 instance, which we have named “Server”.

  • We have successfully set up the EKS cluster with the help of the Terraform files.

Install Kubectl

Before connecting to the EKS cluster, you need to install kubectl.
Run the following command:

sudo snap install kubectl --classic

Connect to the EKS Cluster

After creating an EKS cluster, you need to configure kubectl to connect to it.
Run the following command:

#aws eks --region <your-region> update-kubeconfig --name <your-cluster-name>
aws eks --region ap-south-1 update-kubeconfig --name amit-cluster
  • This command updates your kubeconfig file and allows you to authenticate to the cluster.

  • To verify connectivity, run:
kubectl get nodes
  • If the command fails, it means you do not have permission to connect to the cluster. But since you have updated the kubeconfig file, you will be authenticated and ready to manage the EKS cluster from your Server instance.
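If kubectl get nodes fails with an authorization error, the IAM identity you configured with aws configure may need to be mapped in the cluster's aws-auth ConfigMap. A hedged sketch (the ARN and username are placeholders; run this as the cluster creator):

```yaml
# Edit with: kubectl edit configmap aws-auth -n kube-system
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/<your-user>
      username: <your-user>
      groups:
        - system:masters   # full cluster admin; restrict this in production
```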

Create 3 EC2 instances with t2.medium and set them up

  1. Nexus - run as a Docker container

  2. SonarQube - run as a Docker container

  3. Jenkins - install directly on the machine.

  • Install Jenkins on EC2 Machine “Jenkins”

  • Set up SonarQube and Nexus with the help of Docker containers on the other two EC2 instances - install Docker on each of those machines

Hopefully you have set up all the configurations by now.

  1. Installing Nexus with docker on our Nexus instance

  • $docker run -itd -p 8081:8081 sonatype/nexus3

  • Access Nexus: http://<ip-address>:8081

  • Go inside container and fetch the password for nexus

  • docker exec -it <cont id> sh

  • cd sonatype-work/nexus3

  • cat admin.password

  • To log in to Nexus we need the password, fetched by entering the Docker container directly from the terminal, as we did above.

  2. Installing SonarQube with Docker on our SonarQube instance

  • $docker run -d -p 9000:9000 sonarqube:lts-community

  • Access SonarQube: http://<ip-address>:9000

  • Note: By default, the SonarQube username and password are both “admin”.

  3. Installing Jenkins on our “Jenkins” instance directly + Jenkins configurations

    Make sure you have installed Java and Jenkins. Java is a prerequisite for Jenkins to run.
    Note: You can also install headless Java. Headless Java is a package that does not provide a UI, which is enough for Jenkins.

  • Install trivy on jenkins instance

  •                   sudo apt-get install wget apt-transport-https gnupg lsb-release
                      wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
                      echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
                      sudo apt-get update
                      sudo apt-get install trivy
    

    Trivy is not available as a Jenkins plugin by default, so we have installed it externally on our Jenkins master node. For now we will operate Jenkins with a single node only for building.

  • Install Kubectl on Jenkins instance, because we will have commands to operate for Kubernetes through Jenkins.

    $sudo snap install kubectl --classic


Getting back to our “Server Instance” through which we are operating our EKS Cluster

Login to “Server” EC2 instance and Create a Service Account and Assign RBAC Permissions

Create a Namespace

kubectl create ns webapps
  • This creates a namespace called webapps where the service account will be deployed.

Creating Service Account

vim sa.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
kubectl apply -f sa.yml

Create Role

vim role.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
        - ""
        - apps
        - autoscaling
        - batch
        - extensions
        - policy
        - rbac.authorization.k8s.io
    resources:
      - pods
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingresses
      - jobs
      - limitranges
      - namespaces
      - nodes
      - secrets
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
kubectl apply -f role.yml

Bind the role to service account

vim rolebind.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps 
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role 
subjects:
- namespace: webapps 
  kind: ServiceAccount
  name: jenkins
kubectl apply -f rolebind.yml

Create a Token for Authentication | Creating for Service Account

vi sec.yml

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  namespace: webapps
  annotations:
    kubernetes.io/service-account.name: jenkins # our service account name

Apply the manifest to generate the token:

kubectl apply -f sec.yml

Extract token now

kubectl describe secret mysecretname -n webapps

You will see a token; save it to a file. We will use it for authentication in Jenkins.
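Instead of copying the token out of the describe output by hand, you can extract it directly. A convenience sketch (requires the secret created above and cluster access):

```shell
# Pull just the token field from the secret and decode it
kubectl get secret mysecretname -n webapps \
  -o jsonpath='{.data.token}' | base64 -d > jenkins-token.txt
```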

Login to Jenkins Instance and Configure Jenkins

Store the Kubernetes Token in Jenkins

  1. Go to Jenkins Dashboard → Manage Jenkins → Manage Credentials

  2. Select Global Credentials → Add Credentials

  3. Choose Secret Text

  4. Paste the copied Kubernetes token

  5. Set ID as k8-token

  6. Save the credentials.

Install Required Plugins

Go to Manage Jenkins → Manage Plugins → Available Plugins, search for and install:

  • SonarQube Scanner

  • Config File Provider

  • Maven Integration

  • Pipeline Maven Integration

  • Pipeline Stage View

  • Docker Pipeline

  • Kubernetes

  • Kubernetes Credentials

  • Kubernetes Client API

  • Kubernetes CLI

Restart Jenkins after installation.


Doing some Configurations in tools before continuing Pipeline

Define Maven

To define the Maven tool: go to Manage Jenkins > Tools > Maven Installations > Add Maven > Name: maven3

Defining Sonar-scanner

To define the sonar-scanner tool: go to Manage Jenkins > Tools > SonarQube Scanner Installations > Add > Name: sonar-scanner

Note: We can skip the JDK 17 and Docker tool definitions here. Just like Trivy, we have installed JDK 17 directly on the Jenkins machine, since a JDK is a prerequisite for installing Jenkins.


Integrate SonarQube and Jenkins

  1. First, create a user in SonarQube: go to SonarQube > Administration > Security > Users.

  2. Create a token for that user (as Administrator) to authenticate Jenkins.

  3. Log in to Jenkins to add the SonarQube token: go to Manage Jenkins > Credentials > Global > Add credentials.

  4. Add the credentials as Secret Text, pasting the token we copied from SonarQube.

Now adding SonarQube Server inside Jenkins

Go to Jenkins > Manage Jenkins > System > SonarQube Installation

Congrats! You have successfully integrated Sonar and Jenkins

Integrate Nexus and Jenkins

Configure Managed Files in Jenkins for Nexus

  1. Go to Jenkins Dashboard → Click on Manage Jenkins.

  2. Click on "Managed files".

  3. Click on "Admin Config" → Select "Global Maven Settings".

  4. Rename it to "Maven-settings".

  5. Click on Next.

  6. Scroll down to the Credentials section for Nexus.

  7. Uncomment the <server> block in the configuration by moving the comment markers above <server>.

  8. Add credentials for the specific Nexus repository.

To fetch the nexus credentials, let’s login to nexus and refer below steps:

Configure Nexus Repository Credentials for Managed Files in Jenkins

  1. Nexus uses two repositories for a Java-based application:

    • Maven Releases

    • Maven Snapshots

  2. Copy the repository names and paste them into Jenkins:

    • ID → Repository Name.

    • Username → admin.

    • Password → Your Nexus password.

  3. Repeat the process for both repositories.

  4. Click Save.

Configure Maven Settings in pom.xml

  • Go to the Nexus repository → Copy the URL.

Copy for both: Maven-releases and Maven-snapshots

  • Navigate to the pom.xml file of your project in the GitHub repo

  • Enter edit mode.

  • At the end of the file, check for the <repositories> section:

    • If it's not present, add it manually.

  • Update the repository URLs for:

    • Maven Releases.

    • Maven Snapshots.

  • Verify the artifact type (release or snapshot).
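Note that for publishing with mvn deploy, Maven typically reads a <distributionManagement> section (while <repositories> is used for resolving dependencies). A hedged sketch of what the section might look like; the host address is a placeholder for your Nexus instance, and the <id> values must match the server IDs in the Jenkins-managed Maven settings:

```xml
<distributionManagement>
    <repository>
        <id>maven-releases</id>
        <url>http://NEXUS-IP:8081/repository/maven-releases/</url>
    </repository>
    <snapshotRepository>
        <id>maven-snapshots</id>
        <url>http://NEXUS-IP:8081/repository/maven-snapshots/</url>
    </snapshotRepository>
</distributionManagement>
```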


Create a Jenkins Pipeline

  1. Go to Jenkins Dashboard

  2. Click New ItemPipeline

  3. Name it blue-green-deployment

  4. Check “Discard Old Builds” > “Max Builds to keep: 2”

  5. Under Build Triggers, enable Poll SCM if needed

  6. Scroll to Pipeline and select Pipeline Script

1st stage: Git Checkout

Use Pipeline Syntax to generate the script for the “Git Checkout” stage.

Add GitHub Personal Access Token as Secret Text in Global Credential to authenticate Jenkins with Github.

Generated syntax:

 stage('Git Checkout') {
            steps {
              git branch: 'main', credentialsId: 'git-cred', url: 'https://github.com/amitsinghs98/blue-green-deployment-v1.git'
            }
        }

2nd stage: Compile

We run compile early so that issues are found at the earliest possible stage. Compiling helps reveal any syntax errors in our source code.

stage('Compile') {
            steps {
                sh 'mvn compile'
            }
        }

Note: any third-party tool installed as a Jenkins plugin needs to be defined in our pipeline script. Since we installed Trivy directly on the Jenkins machine, we don't need to define Trivy here.

3rd stage: Tests

Define the Maven tool inside the pipeline using the "tools" block.

tools {
    maven 'maven3'
}

The tool type is maven and the tool name is maven3; "maven3" is the tool name configured in Jenkins.

To skip test cases (if they are failing), modify the command as shown. Adding the third stage:

 stage('Test') {  
            steps {  
                sh 'mvn test -DskipTests=true'
            }  
        }

4th stage: Perform File System Scan with Trivy

If Trivy is not yet installed on the Jenkins machine, install it first:

    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh
    sudo mv /root/bin/trivy /usr/local/bin/trivy
    sudo chmod +x /usr/local/bin/trivy
    echo $PATH
    trivy -v

  stage('Trivy FS Scan') {  
            steps {  
                sh 'trivy fs --format table -o fs.html .'
            }  
        }

5th stage: Configure SonarQube for Code Analysis

SonarQube consists of two parts:

  • SonarQube Server (where reports are stored).

  • SonarQube Scanner (which performs the analysis).

Define SonarQube Scanner in the pipeline first:

environment {
    SCANNER_HOME = tool 'sonar-scanner'
}

This is the home directory of the sonar-scanner tool.

Execute SonarQube analysis:

Here ‘sonar’ is just the name of the SonarQube server, as we configured it under Manage Jenkins > System.

Since this is a Java project, we target the “target” directory, which contains the compiled Java binaries.
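Putting those pieces together, the analysis stage looks roughly like this. A sketch: the project key and name (bankapp) are assumptions, and 'sonar' must match the SonarQube server name configured in Manage Jenkins > System:

```groovy
stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv('sonar') {   // injects the server URL and auth token
            sh "$SCANNER_HOME/bin/sonar-scanner " +
               "-Dsonar.projectKey=bankapp " +
               "-Dsonar.projectName=bankapp " +
               "-Dsonar.java.binaries=target"   // compiled classes from the Compile stage
        }
    }
}
```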

6th Stage: Configure Quality Gate Check

  1. Go to SonarQube Administration → Configuration → Webhooks.

Create a new webhook:

  • Name: Jenkins.

  • URL: http://<JENKINS_URL>/sonarqube-webhook

We are all set: we have successfully created the webhook on SonarQube.

Now we can configure Quality Gate Check in pipeline

   stage('Quality Gate Check') {  
            steps {  
               timeout(time: 1, unit: 'HOURS') {
                     waitForQualityGate abortPipeline: true
                }
            }  
        }

We used the Pipeline Syntax generator twice while writing this code: first to generate the timeout block, and second the waitForQualityGate step.

7th stage: Build Artifacts


        stage('Build Artifacts') {  
            steps {  
                  sh 'mvn package -DskipTests=true'
            }  
        }

We built the artifacts first so that we can publish them to Nexus.

8th Stage: Publish artifacts to Nexus

We will use pipeline syntax


        stage('Publish Artifacts to Nexus') {  
            steps {  
               withMaven(globalMavenSettingsConfig: 'maven-settings', jdk: '', maven: 'maven3', mavenSettingsConfig: '', traceability: true) {
                    sh 'mvn deploy -DskipTests=true'
                    }
            }  
        }  
    }  
}

Docker Integration for Blue-Green Deployment

The goal is to:

  1. Build Docker images.

  2. Tag them as blue or green: we will have two Docker images, one for the blue environment and one for the green environment.

  3. Switch traffic to the latest version.

So, before building images, define deployment parameters to manage the Blue-Green strategy: at the start of each pipeline run we can decide whether this run targets the blue or the green environment.

parameters {
        choice(name: 'DEPLOY_ENV', choices: ['blue', 'green'], description: 'Choose which environment to deploy: Blue or Green')
        choice(name: 'DOCKER_TAG', choices: ['blue', 'green'], description: 'Choose the Docker image tag for the deployment')
        booleanParam(name: 'SWITCH_TRAFFIC', defaultValue: false, description: 'Switch traffic between Blue and Green')
    }

  • DEPLOY_ENV: Defines if the deployment is for blue or green.

  • DOCKER_TAG: Defines the image tag.

  • SWITCH_TRAFFIC: Controls whether to switch traffic. We can decide at runtime whether you want to switch traffic or not. Helps in zero downtime.

Set Up Docker Variables

environment {

    IMAGE_NAME = "amitsinghs98/bankapp"
    TAG = "${params.DOCKER_TAG}"  // The image tag now comes from the parameter
    KUBE_NAMESPACE = "webapps"    // Namespace used by the later kubectl stages
}

We have defined the environment variables inside the pipeline, just below the agent section, so that we can use these values in the 9th stage, where we build and tag the Docker image.

9th Stage: Build and Tag Docker Image:

We will use the Pipeline Syntax generator to create this step, authenticating with Docker Hub:

 stage('Docker Build and Tag Image') {  
            steps {  
                 withDockerRegistry(credentialsId: 'docker-cred') {
                        sh 'docker build -t ${IMAGE_NAME}:${TAG} .'
                    }
                }  
            }

Ensure Docker credentials are configured in Jenkins under Manage Jenkins → Credentials.

10th stage: Docker Image Scan using Trivy

   stage('Trivy Image Scan') {
            steps {
                sh "trivy image --format table -o image.html ${IMAGE_NAME}:${TAG}"
            }
        }

11th Stage: Push Docker Image to Docker Hub


        stage('Docker Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh "docker push ${IMAGE_NAME}:${TAG}"
                    }
                }
            }
        }

Understanding the Architecture before moving to next stage

We have an application with multiple components:

  • MySQL Database

  • Application (Blue and Green versions)

  • Service for the Application

Key Points:

  • Database and Service remain unchanged because data is not modified, and access URLs remain constant.

  • Only the application pod changes during deployment.

  • We will deploy MySQL first, followed by the application components in a structured manner.

12th Stage: Deploy MySQL Deployment with it’s Service

Change the serverUrl to your EKS API server endpoint, and the clusterName to your EKS cluster's name:

stage('Deploy MySQL Deployment and Service') {
            steps {
                script {
                    withKubeConfig(caCertificate: '', clusterName: 'amit-cluster', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://6577D435A2B49DDF4EF2774516148399.gr7.ap-south-1.eks.amazonaws.com') {
                        sh "kubectl apply -f mysql-ds.yml -n ${KUBE_NAMESPACE}"  // Ensure you have the MySQL deployment YAML ready
                    }
                }
            }
        }

13th Stage: Deploy Application Load Balancer Service

stage('Deploy SVC-APP') {
            steps {
                script {
                    withKubeConfig(caCertificate: '', clusterName: 'amit-cluster', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://46743932FDE6B34C74566F392E30CABA.gr7.ap-south-1.eks.amazonaws.com') {
                        sh """ if ! kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}; then
                                kubectl apply -f bankapp-service.yml -n ${KUBE_NAMESPACE}
                              fi
                        """
                   }
                }
            }
        }
  • This will expose the application to the outside world.

  • The LoadBalancer will route traffic to the correct pod (Blue or Green).

  • With this LB we can get URL to access the application

We added a condition: the service is deployed only if it does not already exist.

Initially we will set up the Blue version.
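The initial bankapp-service.yml can select the blue pods explicitly via a version label. An illustrative sketch, not the repo's exact file; the port numbers are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bankapp-service
  namespace: webapps
spec:
  type: LoadBalancer
  selector:
    app: bankapp
    version: blue      # the Switch Traffic stage later patches this to green
  ports:
    - port: 80
      targetPort: 8080
```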

Blue-Green Deployment Strategy

14th Stage: The blue or green deployment is triggered according to the parameters chosen at runtime

 stage('Deploy to Kubernetes') {
            steps {
                script {
                    def deploymentFile = ""
                    if (params.DEPLOY_ENV == 'blue') {
                        deploymentFile = 'app-deployment-blue.yml'
                    } else {
                        deploymentFile = 'app-deployment-green.yml'
                    }

                    withKubeConfig(caCertificate: '', clusterName: 'amit-cluster', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://6577D435A2B49DDF4EF2774516148399.gr7.ap-south-1.eks.amazonaws.com') {
                        sh "kubectl apply -f ${deploymentFile} -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }

MOST IMPORTANT: Switching traffic between the blue and green deployments

15th Stage: Switching Traffic

Here we decide which deployment the Load Balancer URL will point to.

 stage('Switch Traffic Between Blue & Green Environment') {
            when {
                expression { return params.SWITCH_TRAFFIC }
            }
            steps {
                script {
                    def newEnv = params.DEPLOY_ENV

                    // Always switch traffic based on DEPLOY_ENV
                    withKubeConfig(caCertificate: '', clusterName: 'amit-cluster', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://6577D435A2B49DDF4EF2774516148399.gr7.ap-south-1.eks.amazonaws.com') {
                        sh '''
                            kubectl patch service bankapp-service -p "{\\"spec\\": {\\"selector\\": {\\"app\\": \\"bankapp\\", \\"version\\": \\"''' + newEnv + '''\\"}}}" -n ${KUBE_NAMESPACE}
                        '''
                    }
                    echo "Traffic has been switched to the ${newEnv} environment."
                }
            }
        }
The kubectl patch command in the stage above switches the traffic by updating the Service selector.

We have used a when condition: this stage only runs if Switch traffic is enabled.
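The escaped JSON inside the sh step is hard to read; building the patch payload in a variable first makes the intent clearer (a readability sketch of the same patch):

```shell
# Construct the JSON merge patch that repoints the Service selector
NEW_ENV=green
PATCH="{\"spec\":{\"selector\":{\"app\":\"bankapp\",\"version\":\"$NEW_ENV\"}}}"
echo "$PATCH"
# Then apply it (requires cluster access):
# kubectl patch service bankapp-service -n webapps -p "$PATCH"
```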

16th and Last Stage: Verify the Deployment

 stage('Verify Deployment') {
            steps {
                script {
                    def verifyEnv = params.DEPLOY_ENV
                    withKubeConfig(caCertificate: '', clusterName: 'amit-cluster', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://46743932FDE6B34C74566F392E30CABA.gr7.ap-south-1.eks.amazonaws.com') {
                        sh """
                        kubectl get pods -l version=${verifyEnv} -n ${KUBE_NAMESPACE}
                        kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}
                        """
                    }
                }
            }
        }

Jenkins Pipeline Steps

  1. Fetch the EKS Cluster Endpoint

  2. Deploy MySQL

  3. Deploy Application Service

  4. Deploy Application Pod (Blue or Green)

  5. Switch Traffic Based on Parameters

  6. Verify Deployment

Now we move ahead with the green environment and switch traffic from blue to green.

Final Thoughts

  • First, deploy Blue and verify it works.

  • For upgrades, deploy Green and switch traffic.

  • Use Jenkins to automate the entire process.

  • Keep both versions running for rollback. 🚀
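To check which environment is currently live, you can inspect the Service selector directly (a quick sketch; requires cluster access):

```shell
# Prints "blue" or "green" depending on where traffic currently points
kubectl get svc bankapp-service -n webapps \
  -o jsonpath='{.spec.selector.version}'
```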

