Zero-Downtime Delivery with Blue-Green Deployment on AWS EKS

Table of contents
- Infrastructure Setup
- Security Configuration
- Launch EC2 Instances
- Tool Installation and Configuration
- EKS Cluster Provisioning
- 🖥️ Post-Launch Setup on the EKS Server
- Breakdown of Terraform Code for EKS Cluster Provisioning
- Terraform Variables – variables.tf
- Terraform Outputs – output.tf
- Provisioning the EKS Cluster with Terraform
- Create a ServiceAccount, Role & RoleBinding, and Generate a Service Account Token
- Jenkins CI/CD Pipeline
- Blue-Green Deployment Implementation
- 🏁 Conclusion

Git Repository: https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD
In today's fast-paced DevOps world, delivering high-quality applications with minimal downtime is a key business requirement. Blue-Green Deployment has emerged as a popular strategy to achieve zero-downtime releases by maintaining two separate production environments—Blue (current live) and Green (new version).
This comprehensive guide will walk you through the fundamentals and practical implementation of a production-ready CI/CD pipeline using the Blue-Green Deployment strategy. Whether you're deploying a microservice on Amazon EKS or another Kubernetes platform using Jenkins and Docker, this guide will help you build a resilient deployment workflow that reduces risk, increases reliability, and enhances the user experience.
Infrastructure Setup
Step 1: Provisioning EC2 Infrastructure for CI/CD Pipeline
To build a robust and scalable Blue-Green Deployment CI/CD pipeline, we'll launch four dedicated EC2 instances, each with a specific purpose. These servers will be deployed on Ubuntu 24.04 LTS, with sufficient storage and public network access.
EC2 Server Overview
| Server Name | Purpose | OS Version | Storage | Public Access | Role |
| --- | --- | --- | --- | --- | --- |
| Master Server | Orchestration | Ubuntu 24.04 | 25 GB | Yes | Central admin node |
| Jenkins Server | CI/CD Automation | Ubuntu 24.04 | 25 GB | Yes | Pipeline runner |
| SonarQube Server | Code Quality | Ubuntu 24.04 | 25 GB | Yes | Static analysis |
| Nexus Server | Artifact Storage | Ubuntu 24.04 | 25 GB | Yes | Binary repository |
Security Configuration
🔐 Security Group Configuration
Before launching these servers, you must create a Security Group with access to required ports:
✅ Inbound Rules:
| Port | Protocol | Purpose |
| --- | --- | --- |
| 22 | TCP | SSH access (manual) |
| 80 | TCP | HTTP traffic |
| 443 | TCP | HTTPS traffic |
| 8080 | TCP | Jenkins UI |
| 8081 | TCP | Nexus Repository |
| 9000 | TCP | SonarQube Dashboard |
🔒 Security Tip: You can restrict SSH access (port 22) to your own IP for enhanced security.
🔑 Create a Key Pair
To securely connect to each instance via SSH:
Go to the EC2 Dashboard > Key Pairs
Click Create key pair
Set a name like ci-cd-key
Choose RSA or ED25519, then download the .pem file securely
Keep this file safe and private—use it when connecting to the servers
Launch EC2 Instances
🚀 Launch EC2 Instances (Repeat for Each Server)
For each server, follow these steps:
· Go to EC2 Dashboard → Launch Instance
· Select Ubuntu 24.04 LTS AMI
· Choose instance type (e.g., t3.medium for Jenkins, t2.medium for others)
· Attach public subnet
· Set storage to 25 GB
· Assign the security group created above
· Use the key pair created earlier
· Tag each instance appropriately:
1. Name: Master Server
2. Name: Jenkins Server
3. Name: SonarQube Server
4. Name: Nexus Server
Once these instances are up and running, you'll have a strong foundation for building your CI/CD pipeline with automated build, test, quality check, and deployment workflows.
Tool Installation and Configuration
🗂️ Step-by-Step: Installing and Accessing Nexus on an EC2 Server via Docker
In this section, we’ll install and run Nexus Repository Manager inside a Docker container on your EC2 instance. Nexus will serve as your private artifact repository for storing Maven, npm, Docker, and other packages used in your DevOps pipeline.
🔐 Step 1: SSH into the Nexus Server
Use your .pem key and the server’s public IP to connect via SSH:
ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>
🔄 Step 2: Update the Server
Before installing any packages, update your package repositories:
sudo apt update && sudo apt upgrade -y
🐳 Step 3: Install Docker
Install Docker to run Nexus as a container:
sudo apt install -y docker.io
👤 Step 4: Create a Docker User (Non-root)
It’s a good practice not to run containers as root. Add your Ubuntu user to the Docker group:
sudo usermod -aG docker ubuntu
🔁 Log out and log back in (or run newgrp docker) for changes to take effect.
📦 Step 5: Run the Nexus Docker Container
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
🌐 Step 6: Access Nexus in Browser
Open your browser and navigate to:
http://<your-ec2-public-ip>:8081
You’ll see the Nexus login page.
🔑 Step 7: Get the Default Admin Password
The default username is admin. To retrieve the initial password, open a shell inside the running container and read it from the admin.password file:
docker exec -it nexus /bin/bash
cat /nexus-data/admin.password
🔁 Step 8: Login and Change the Password
● Go back to your browser
● Login with:
○ Username: admin
○ Password: (paste the value from above)
● Follow the prompt to set your own secure password
Now Nexus is ready to use as your private artifact repository! 🎉
🔎 Step-by-Step: Installing SonarQube on EC2 Using Docker
SonarQube is a powerful tool for continuous code quality inspection. It can analyze code for bugs, vulnerabilities, and code smells. In this guide, we'll set up SonarQube using Docker on an EC2 instance.
🔐 Step 1: SSH into the SonarQube Server
Use your .pem key and public IP to log in:
ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>
🔄 Step 2: Update the Server
Always begin by updating system packages:
sudo apt update && sudo apt upgrade -y
🐳 Step 3: Install Docker
SonarQube will run inside a Docker container. Install Docker using:
sudo apt install -y docker.io
👤 Step 4: Create a Docker User (Non-root)
Avoid running Docker as the root user. Add your current user (e.g., ubuntu) to the Docker group:
sudo usermod -aG docker ubuntu
🔁 Log out and log back in (or run newgrp docker) for changes to take effect.
📦 Step 5: Run the SonarQube Docker Container
Now, run the SonarQube container:
docker run -d --name sonarqube -p 9000:9000 sonarqube:latest
This will:
● Run SonarQube detached
● Map the default web interface to port 9000
🌐 Step 6: Access SonarQube in Your Browser
http://<your-ec2-public-ip>:9000
You should see the SonarQube login page.
🔐 Step 7: Default Credentials
Use the following to log in:
● Username: admin
● Password: admin
🔁 Step 8: Change the Default Password
After logging in for the first time, SonarQube will prompt you to change the default password. Enter a secure password of your choice to continue.
🔐 How to Generate a SonarQube Token and Set Up a Webhook
To integrate SonarQube with Jenkins (or any CI/CD tool), you'll need an authentication token and a webhook for real-time Quality Gate feedback.
🔑 Step 1: Generate SonarQube Token
1. Log in to your SonarQube dashboard.
2. On the top right, click on your user avatar → My Account.
3. Navigate to the "Security" tab.
4. In the "Generate Tokens" section:
○ Name: Enter a descriptive name (e.g., jenkins-token).
○ Click "Generate".
5. Copy the token and save it securely (you won't be able to see it again).
✅ Use this token as a Secret Text credential in Jenkins under:
Manage Jenkins → Credentials → Global → Add Credentials
🔁 Step 2: Configure Webhook in SonarQube
Webhooks allow SonarQube to send Quality Gate status back to Jenkins.
1. From the SonarQube main dashboard, go to: Projects → [Select your project]
2. Click on the "Project Settings" (gear icon).
3. Go to "Webhooks".
4. Click "Create".
● Name: Jenkins
● URL: http://<your-jenkins-server>:<port>/sonarqube-webhook/
5. Click "Save".
EKS Cluster Provisioning
🖥️ Post-Launch Setup on the EKS Server
After launching the EC2 instance that serves as the EKS admin node (the Master Server from the table above), connect to it using its public IP address and the .pem key file. Once connected, perform the following setup steps to prepare the server for provisioning the EKS cluster:
🔧 Tasks to Perform on the Master Server:
Update the OS Package Repository
sudo apt update -y
Install the AWS CLI on this server
sudo apt update -y && \
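A full version of this chained command, assuming the official AWS CLI v2 installer bundle from AWS, looks like this:

```bash
# Install prerequisites, download the official AWS CLI v2 bundle, and install it
sudo apt update -y && \
sudo apt install -y unzip curl && \
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
sudo ./aws/install

# Verify the installation
aws --version
```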
Terraform Installation
sudo apt update -y && \
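Terraform can likewise be installed from HashiCorp's official apt repository; a typical sequence per their documentation is:

```bash
# Add HashiCorp's GPG key and apt repository, then install Terraform
sudo apt update -y && \
sudo apt install -y gnupg software-properties-common wget && \
wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list && \
sudo apt update && sudo apt install -y terraform

# Verify the installation
terraform -version
```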
Create an IAM User with Access & Secret Key
To securely interact with AWS services using the AWS CLI, Terraform, or CI/CD tools like Jenkins, we need to create a dedicated IAM user with programmatic access.
Follow these steps to create an IAM user with Access Key ID and Secret Access Key:
Create IAM User with Programmatic Access
Go to the AWS Management Console
Navigate to IAM > Users
Click Add users
Configuration Steps
User Details
User name: Blue-Green-Cred
Access type: ✅ Programmatic access
Set Permissions
Choose Attach policies directly
Select AdministratorAccess
Tags (Optional)
Key: Project
Value: Blue-Green-CICD
Review and Create
Review the configuration and click Create user
Save Your Access Credentials
Access Key ID
Secret Access Key
Click Download .csv and store it securely.
Configure AWS CLI with Access & Secret Key
After creating an IAM user with programmatic access, the next step is to configure the AWS CLI on your EC2 server.
🔧 Run aws configure
On your EC2 instance (Ubuntu 24.04), open the terminal and run:
aws configure
📝 When prompted, enter the following:
AWS Access Key ID [None]: <Your Access Key ID>
AWS Secret Access Key [None]: <Your Secret Access Key>
Default region name [None]: us-east-1
Default output format [None]: json
Replace <Your Access Key ID> and <Your Secret Access Key> with the credentials you downloaded earlier.
Breakdown of Terraform Code for EKS Cluster Provisioning
Before we launch the infrastructure, it's important to understand the Terraform code we're about to apply.
This review will give you clarity on how the EKS cluster is being configured — including VPC setup, subnets, IAM roles, security groups, and the actual cluster and node group definitions.
Taking a moment to walk through the code ensures:
● You know what resources will be created
● You can identify any region-specific or account-specific values
● You ensure secure and optimized resource configurations
In the next section, we'll break down each major block of the Terraform script and then proceed with initialization and deployment.
📌 1. Provider Configuration
provider "aws" { |
This tells Terraform to use the AWS provider and deploy resources in the us-east-1 region.
2. VPC & Subnets
resource "aws_vpc" "DevOpsSubroto_vpc" { |
Creates a custom Virtual Private Cloud (VPC) with a /16 CIDR block. This will serve as the networking layer for all your AWS resources.
resource "aws_subnet" "DevOpsSubroto_subnet" { |
Creates two public subnets, one in each availability zone (us-east-1a and us-east-1b). They are assigned unique CIDR ranges using cidrsubnet().
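For reference, a sketch of these two blocks under the description above (the 10.0.0.0/16 CIDR and tags are assumptions; exact values may differ in the repository):

```hcl
resource "aws_vpc" "DevOpsSubroto_vpc" {
  cidr_block = "10.0.0.0/16" # assumed CIDR; any /16 works

  tags = { Name = "DevOpsSubroto-vpc" }
}

resource "aws_subnet" "DevOpsSubroto_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.DevOpsSubroto_vpc.id
  cidr_block              = cidrsubnet(aws_vpc.DevOpsSubroto_vpc.cidr_block, 8, count.index)
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true

  tags = { Name = "DevOpsSubroto-subnet-${count.index}" }
}
```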
3. Internet Gateway & Route Table
resource "aws_internet_gateway" "DevOpsSubroto_igw" { ... } |
Defines a route table with a default route (0.0.0.0/0) pointing to the IGW.
resource "aws_route_table_association" "a" { |
Associates the route table with each subnet, making them public subnets.
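A corresponding sketch of the IGW, route table, and association blocks (the aws_route_table resource name is an assumption):

```hcl
resource "aws_internet_gateway" "DevOpsSubroto_igw" {
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id
}

resource "aws_route_table" "DevOpsSubroto_route_table" { # assumed name
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id

  # Default route: send all non-local traffic to the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.DevOpsSubroto_igw.id
  }
}

resource "aws_route_table_association" "a" {
  count          = 2
  subnet_id      = aws_subnet.DevOpsSubroto_subnet[count.index].id
  route_table_id = aws_route_table.DevOpsSubroto_route_table.id
}
```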
🔐 4. Security Groups
resource "aws_security_group" "DevOpsSubroto_cluster_sg" { ... } |
Creates a security group for the EKS control plane with open egress (outbound) rules.
resource "aws_security_group" "DevOpsSubroto_node_sg" { ... } |
Creates a security group for worker nodes with open ingress and egress. This should be tightened in production for better security.
☸️ 5. EKS Cluster and Node Group
resource "aws_eks_cluster" "DevOpsSubroto" { |
Provisions the EKS control plane, specifying:
● VPC subnets
● IAM role for EKS
● Security group for the control plane
resource "aws_eks_node_group" "DevOpsSubroto" { |
Provisions a node group with:
● 3 EC2 instances of type t2.large
● SSH access enabled
● Role for EKS worker nodes
● Public subnet association
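Putting that together, the cluster and node group definitions would look roughly like this (the node group name is an assumption; the cluster name matches the one used with update-kubeconfig later):

```hcl
resource "aws_eks_cluster" "DevOpsSubroto" {
  name     = "DevOpsSubroto-cluster"
  role_arn = aws_iam_role.DevOpsSubroto_cluster_role.arn

  vpc_config {
    subnet_ids         = aws_subnet.DevOpsSubroto_subnet[*].id
    security_group_ids = [aws_security_group.DevOpsSubroto_cluster_sg.id]
  }
}

resource "aws_eks_node_group" "DevOpsSubroto" {
  cluster_name    = aws_eks_cluster.DevOpsSubroto.name
  node_group_name = "DevOpsSubroto-node-group" # assumed name
  node_role_arn   = aws_iam_role.DevOpsSubroto_node_group_role.arn
  subnet_ids      = aws_subnet.DevOpsSubroto_subnet[*].id
  instance_types  = ["t2.large"]

  # Three worker nodes, as described above
  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }

  # SSH access to the worker nodes
  remote_access {
    ec2_ssh_key               = var.ssh_key_name
    source_security_group_ids = [aws_security_group.DevOpsSubroto_node_sg.id]
  }
}
```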
🔐 6. IAM Roles and Policies
resource "aws_iam_role" "DevOpsSubroto_cluster_role" { ... } |
Creates an IAM role assumed by the EKS control plane, allowing it to manage AWS resources.
resource "aws_iam_role_policy_attachment" "DevOpsSubroto_cluster_role_policy" { |
Attaches the required AmazonEKSClusterPolicy to the control plane role
🔧 Node Group IAM Role
resource "aws_iam_role" "DevOpsSubroto_node_group_role" { ... } |
Creates an IAM role for EC2 worker nodes with the following permissions:
● AmazonEKSWorkerNodePolicy: Core EKS node functionality
● AmazonEKS_CNI_Policy: Allows Kubernetes networking
● AmazonEC2ContainerRegistryReadOnly: Pull images from ECR
provider "aws" { |
Terraform Variables – variables.tf
To make our Terraform code more reusable and configurable, we define variables in a separate file called variables.tf. One such variable is the SSH key name, which enables secure remote access to your EC2 worker nodes.
variable "ssh_key_name" { |
Terraform Outputs – output.tf
Once your Terraform configuration is applied, it's important to extract and display key resource details. This is where the output.tf file comes in.
Terraform outputs allow you to view important resource identifiers that you can reuse for:
● Debugging
● Interconnecting with other modules
● Future automation and scripting
output "cluster_id" { |
Clone Your Git Repository into the Server
Now that your EC2 instance is configured with AWS CLI and other essentials, the next step is to clone your Git repository. Use the following command to clone your repository. Replace the URL with your actual Git repository:
git clone https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD.git
Provisioning the EKS Cluster with Terraform
After cloning the Git repository, we’ll navigate into the project directory that contains the Terraform configuration files to provision the EKS cluster.
We are using Terraform to launch and manage the EKS infrastructure on AWS.
To move into the directory where your Terraform code is located, run the following command:
cd Blue-Green-Deployment-with-CICD
Now it's time to provision the EKS cluster using Terraform.
🔧 1. Initialize Terraform
terraform init
This command initializes your working directory. It:
● Downloads the AWS provider plugin
● Prepares the backend (if configured)
● Sets up required modules
🧪 2. Validate Terraform Code
terraform validate
This checks whether your configuration files are syntactically valid.
📝 3. Preview Infrastructure Changes
terraform plan
This command gives you a dry run of what Terraform will do when applied:
● What resources will be created, modified, or destroyed
● Helps avoid unintended changes
⚙️ 4. Apply the Infrastructure
terraform apply --auto-approve
After running this command, it takes roughly 8-10 minutes to provision the cluster.
Verifying the EKS Cluster and Nodes
Now that we've applied the Terraform configuration, it’s time to verify whether the EKS cluster and its associated worker nodes have been created successfully.
✅ 1. Install kubectl
To interact with your Kubernetes cluster, you need the Kubernetes CLI tool—kubectl. Install it using:
sudo snap install kubectl --classic
📡 2. Try Fetching the Node List
After installation, try running:
kubectl get nodes
However, you’ll notice that no nodes or cluster information is displayed. That’s because your system is not yet connected to the EKS cluster.
🔗 3. Update Your Kubeconfig File
To connect kubectl with your EKS cluster, update the kubeconfig file using the AWS CLI:
aws eks --region us-east-1 update-kubeconfig --name DevOpsSubroto-cluster
🔄 Replace us-east-1 with your actual region and DevOpsSubroto-cluster with your EKS cluster name, if different.
📋 4. Re-check Cluster Status
After updating the kubeconfig file, run:
kubectl get nodes
You should now see a list of worker nodes in Ready status, confirming that your EKS cluster is up and running.
Create a ServiceAccount, Role & RoleBinding, and Generate a Service Account Token
🔧 Creating a ServiceAccount for Jenkins in Kubernetes
To allow Jenkins to interact with your Kubernetes cluster—for example, to deploy applications or manage resources via a pipeline—you need to create a ServiceAccount.
A ServiceAccount provides an identity for processes running in a Pod, enabling secure API access without relying on user credentials.
jenkins-serviceaccount.yaml
apiVersion: v1
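In full, the manifest is short; the name and namespace below match the verification command that follows:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
```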
To create this ServiceAccount in your cluster, run:
kubectl apply -f jenkins-serviceaccount.yaml
You can verify creation with:
kubectl get serviceaccount jenkins -n webapps
🔐 Defining Kubernetes Role for Jenkins
To allow Jenkins to deploy applications and interact with Kubernetes resources securely, we need to assign specific permissions using RBAC (Role-Based Access Control). Below is a Kubernetes Role that grants wide access to various resources in the webapps namespace.
jenkins-role.yaml
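A sketch of such a Role, assuming the app-role name referenced later in this section (the exact resource and verb lists in the repository may differ):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  # Broad access for CI/CD; tighten resources and verbs in production
  - apiGroups: ["", "apps", "autoscaling", "batch", "extensions", "policy", "rbac.authorization.k8s.io"]
    resources: ["pods", "secrets", "configmaps", "services", "deployments", "replicasets", "daemonsets", "statefulsets", "jobs", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```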
To apply jenkins-role.yaml:
kubectl apply -f jenkins-role.yaml
This Role grants broad access to many resources. It’s suitable for a CI/CD tool like Jenkins, but in production environments, you may want to restrict resources and verbs to follow the principle of least privilege.
🔗 Binding the Role to Jenkins with a RoleBinding
Now that we’ve created a Role with the necessary Kubernetes permissions, the next step is to bind that Role to the Jenkins ServiceAccount. This is done using a RoleBinding, which links a Role to a user, group, or ServiceAccount within the same namespace.
jenkins-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
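Expanded, the binding ties the app-role Role to the jenkins ServiceAccount (both names are taken from the commands and explanation below):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps
```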
Apply the RoleBinding
kubectl apply -f jenkins-rolebinding.yaml
You can confirm the RoleBinding with:
kubectl get rolebinding app-rolebinding -n webapps
With this RoleBinding in place, your Jenkins pods running in the webapps namespace can now access Kubernetes resources as defined in the app-role.
🔐 Creating a Secret Token for a ServiceAccount
To enable an external system (like Jenkins) to authenticate with your Kubernetes cluster using a ServiceAccount, you can manually create a Secret of type kubernetes.io/service-account-token.
This secret contains a token and certificate that can be used for API access.
serviceaccount-token-secret.yaml
apiVersion: v1
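A sketch of the secret manifest (the secret's name is an assumption; the annotation tells Kubernetes which ServiceAccount to bind the generated token to):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-token # assumed secret name
  namespace: webapps
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token
```

Once applied, Kubernetes populates the secret with a token, which you can read with kubectl describe secret jenkins-token -n webapps.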
✅ Apply the Secret
kubectl apply -f serviceaccount-token-secret.yaml
Add Kubernetes Secret Token to Jenkins Credentials
Now that we’ve created a service account token as a Kubernetes Secret, the next step is to copy the token and add it to Jenkins as a global credential. This token allows Jenkins to authenticate and deploy workloads into the EKS cluster.
Jenkins CI/CD Pipeline
⚙️ Step-by-Step: Jenkins Installation and DevOps Configuration on EC2
In this section, we’ll set up a Jenkins CI/CD server on an EC2 instance. Jenkins will serve as the automation hub for building, testing, analyzing, and deploying applications using Docker, Nexus, and SonarQube.
🔐 Step 1: SSH into the Jenkins Server
Connect to your EC2 instance using the .pem key and public IP:
ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>
🔄 Step 2: Update the Server
Update package lists and install available updates:
sudo apt update && sudo apt upgrade -y
☕ Step 3: Install Java 17
Jenkins requires Java to run. Install Java 17 (recommended):
sudo apt install -y openjdk-17-jdk
🛠️ Step 4: Install Jenkins
Add Jenkins repository and install:
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
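Note that apt-key is deprecated on Ubuntu 24.04; the currently documented Jenkins install (per jenkins.io) uses a signed-by keyring instead:

```bash
# Add the Jenkins signing key and apt repository, then install Jenkins (LTS)
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install -y jenkins

# Enable and start the Jenkins service
sudo systemctl enable --now jenkins
```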
👤 Step 5: Add Jenkins User to Docker Group
If Docker is already installed (or will be), add the jenkins user to the docker group so pipeline builds can run Docker commands:
sudo usermod -aG docker jenkins
🌐 Step 6: Access Jenkins via Browser
Open Jenkins in your browser:
http://<your-ec2-public-ip>:8080
You'll see the initial unlock screen.
🔑 Step 7: Retrieve Jenkins Initial Admin Password
Copy the command shown in the Jenkins UI, then run it in your EC2 terminal:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy the password output and paste it into the Jenkins UI to unlock it.
🔌 Step 8: Install Plugins and Setup Admin User
● Click Install suggested plugins
● Wait for installation to complete
● Create your admin user with username, password, and email
🧩 Step 9: Install Project-Specific Plugins
From the Jenkins dashboard:
Manage Jenkins → Manage Plugins → Available
🔐 Step 10: Add Required Credentials
Go to:
Manage Jenkins → Credentials → Global → Add Credentials
● GitHub (Username + Token or SSH Key)
● Docker Hub (Username + Password/API token)
● SonarQube Token
● Nexus Username & Password
🧬 Step 11: Configure Environment Variables
From Jenkins dashboard:
Manage Jenkins → System
Configure Maven in Jenkins
To build and package Java applications from Jenkins pipelines, you need to configure Apache Maven within the Jenkins system settings.
Manage Jenkins → Global Tool Configuration
➕ Add Maven
Click on “Add Maven”.
Set the Name as: maven3 (or any name you prefer).
Check the box: ✅ "Install automatically"
Choose a Maven version to install (e.g., 3.9.6 or latest available).
Configure SonarQube in Jenkins
To enable Jenkins to perform static code analysis using SonarQube, you need to configure the SonarQube server in Jenkins system settings. This integration allows Jenkins jobs to trigger code scans and visualize results inside the Jenkins UI.
Manage Jenkins → Global Tool Configuration
➕ Add SonarQube Scanner
Click on “Add SonarQube Scanner”.
Set the Name as: sonar-token (or any preferred name).
✅ Check the box for “Install automatically”.
Select a SonarQube Scanner version (e.g., latest stable).
🔐 Add SonarQube Server
If you haven't added the SonarQube server yet:
Manage Jenkins → Configure System
Scroll to the SonarQube servers section.
Click “Add SonarQube”.
Set:
Name: sonar (this will be referenced in pipelines)
Server URL: http://<your-sonarqube-ip>:9000
Authentication Token: Add from Jenkins credentials
Configure Nexus Repository in Jenkins via Maven settings.xml
To enable Jenkins to push artifacts to Nexus or pull dependencies, you need to configure a custom Maven settings.xml file that includes the Nexus repository details and credentials.
Manage Jenkins → Managed files
➕ Add a New Configuration File
Click "Add a new Config"
From the list, choose: Global Maven settings.xml
Set a Name:
○ Example: settings.xml, then click Next
📝 Configure the settings.xml Content
Add your customized settings.xml content that includes your Nexus repository information, such as:
settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
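A minimal example of such a settings.xml, assuming repository IDs maven-releases and maven-snapshots (the IDs must match the distributionManagement section of your project's pom.xml):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <!-- Credentials Maven uses when deploying to Nexus -->
    <server>
      <id>maven-releases</id>      <!-- assumed repository ID -->
      <username>admin</username>
      <password>your-nexus-password</password>
    </server>
    <server>
      <id>maven-snapshots</id>     <!-- matches the snapshot repo browsed later -->
      <username>admin</username>
      <password>your-nexus-password</password>
    </server>
  </servers>
</settings>
```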
🐳 Install Docker on Jenkins Server
If Docker is not yet installed:
# Add Docker's official GPG key:
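That comment is the opening line of Docker's official Ubuntu install snippet; the full key-and-repository setup from those docs is:

```bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```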
To install Docker, run the following command:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Add Jenkins to Docker group:
sudo usermod -aG docker jenkins
Restart Jenkins:
sudo systemctl restart jenkins
Install Trivy (for Security Scanning)
Trivy scans container images for vulnerabilities:
sudo apt install -y wget
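The wget package alone isn't enough; a typical Trivy install from Aqua Security's official apt repository looks like this:

```bash
# Add the Trivy signing key and apt repository, then install Trivy
sudo apt install -y wget gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
  gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb generic main" | \
  sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt update
sudo apt install -y trivy

# Verify the installation
trivy --version
```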
✅ Final Step: Reboot and Verify Jenkins Setup
sudo reboot
Blue-Green Deployment Implementation
Create Jenkins Pipeline for Blue-Green CI/CD Deployment
Now that we’ve set up Jenkins with all necessary integrations (Docker, Maven, SonarQube, Nexus, etc.), we’ll create a Jenkins Pipeline job to automate your Blue-Green Deployment process.
Steps to Create a New Pipeline Job
From the Jenkins dashboard, click “New Item”
Enter the job name (e.g., Blue-Green-CICD)
Select “Pipeline”
Click OK
🗂️ Job Configuration
✅ General Section
● Enable: Discard old builds
● Set:
○ Max # of builds to keep: 2
This helps keep your Jenkins server clean by retaining only the latest two builds.
💻 Pipeline Definition
Scroll down to the Pipeline section.
Under Definition, select: Pipeline script
Paste your pipeline script into the editor.
Click Save
🧱 Pipeline Structure & Agent
pipeline {
    agent any
    ...
}
📝 Explanation: Runs the pipeline on any available Jenkins agent.
🔧 Tools Block
tools {
    maven 'maven3'
}
📝 Explanation: Uses maven3 (pre-configured in Jenkins) for Maven-based builds.
🔄 Parameters Block
parameters { ... }
📝 Explanation:
● DEPLOY_ENV: Selects target deployment (blue or green).
● DOCKER_TAG: Image tag to be used.
● SWITCH_TRAFFIC: Boolean toggle to switch live traffic.
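Based on those descriptions, the parameters block presumably looks like this (choice values and descriptions are assumptions):

```groovy
parameters {
    choice(name: 'DEPLOY_ENV', choices: ['blue', 'green'], description: 'Choose which environment to deploy')
    choice(name: 'DOCKER_TAG', choices: ['blue', 'green'], description: 'Choose the Docker image tag (matches the environment)')
    booleanParam(name: 'SWITCH_TRAFFIC', defaultValue: false, description: 'Switch live traffic to the selected environment')
}
```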
🌍 Environment Variables
environment { ... }
📝 Explanation:
● IMAGE_NAME: Docker Hub image name.
● TAG: Set dynamically from user input.
● SCANNER_HOME: SonarQube scanner tool.
● KUBE_NAMESPACE: Target namespace in EKS.
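A sketch of the environment block under those descriptions (the Docker Hub repository name is an assumption):

```groovy
environment {
    IMAGE_NAME = "subrotosharma/bankapp"  // hypothetical Docker Hub repo name
    TAG = "${params.DOCKER_TAG}"          // set dynamically from the build parameter
    KUBE_NAMESPACE = 'webapps'            // namespace created for the app earlier
    SCANNER_HOME = tool 'sonar-token'     // must match the scanner tool name configured in Jenkins
}
```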
🧾 Git Checkout
stage('Git Checkout') { ... }
📝 Explanation: Clones the GitHub repo using provided credentials.
⚙️ Compile
stage('Compile') { ... }
📝 Explanation: Compiles Java source code using Maven.
✅ Tests
stage('Tests') { ... }
📝 Explanation: Runs test phase, though tests are skipped (-DskipTests=true).
🔍 Trivy File System Scan
stage('Trivy FS Scan') { ... }
📝 Explanation: Scans source code for vulnerabilities using Trivy and generates fs.html.
📊 SonarQube Analysis
stage('SonarQube Analysis') { ... }
📝 Explanation: Runs code analysis with SonarQube.
🛑 Quality Gate
stage('Quality Gate Check') { ... }
📝 Explanation: Waits for SonarQube's quality gate results.
📦 Maven Build
stage('Build') { ... }
📝 Explanation: Packages the application into a .jar file.
📤 Deploy Artifact to Nexus
stage('Publish Artifact To Nexus') { ... }
📝 Explanation: Deploys artifacts to Nexus Repository using a configured settings.xml.
🐳 Docker Build
stage('Docker Build & Tag Image') { |
📝 Explanation: Builds and tags the Docker image for deployment.
🔍 Docker Image Scan
stage('Trivy Image Scan') { ... }
📝 Explanation: Scans the Docker image for known vulnerabilities using Trivy.
📤 Docker Push
stage('Docker Push Image') { ... }
📝 Explanation: Pushes the built image to Docker Hub.
🛢️ Deploy MySQL
stage('Deploy MySQL Deployment and Service') { ... }
📝 Explanation: Deploys the MySQL backend to Kubernetes (if not already deployed).
🔧 Deploy App Service (Kubernetes SVC)
stage('Deploy SVC-APP') { ... }
📝 Explanation: Applies the Kubernetes service for the app if it doesn't already exist.
🚀 Deploy to Kubernetes (Blue or Green)
stage('Deploy to Kubernetes') { ... }
📝 Explanation: Deploys to either the blue or green environment using the appropriate deployment manifest.
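A sketch of this stage, assuming per-environment manifest files and a Jenkins credential holding the ServiceAccount token (the filenames, the k8-token credential ID, and the API endpoint are assumptions; withKubeConfig comes from the Kubernetes CLI plugin):

```groovy
stage('Deploy to Kubernetes') {
    steps {
        script {
            // Pick the manifest for the chosen environment (filenames are hypothetical)
            def deploymentFile = params.DEPLOY_ENV == 'blue' ? 'app-deployment-blue.yml' : 'app-deployment-green.yml'
            withKubeConfig(credentialsId: 'k8-token', serverUrl: 'https://<your-eks-api-endpoint>', namespace: "${KUBE_NAMESPACE}") {
                sh "kubectl apply -f ${deploymentFile} -n ${KUBE_NAMESPACE}"
            }
        }
    }
}
```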
🔁 Switch Traffic Between Blue/Green
stage('Switch Traffic Between Blue & Green Environment') { ... }
📝 Explanation: If SWITCH_TRAFFIC is enabled, it updates the Kubernetes service to route traffic to the selected version (blue or green).
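The switch itself is typically a selector patch on the service; a sketch assuming app and version labels on the deployments (the label keys and credential ID are assumptions; bankapp-service is the service queried later):

```groovy
stage('Switch Traffic Between Blue & Green Environment') {
    when { expression { return params.SWITCH_TRAFFIC } }
    steps {
        script {
            def newEnv = params.DEPLOY_ENV
            withKubeConfig(credentialsId: 'k8-token', serverUrl: 'https://<your-eks-api-endpoint>', namespace: "${KUBE_NAMESPACE}") {
                // Repoint the service selector so the load balancer routes to the chosen version
                sh """
                    kubectl patch service bankapp-service -n ${KUBE_NAMESPACE} \
                      -p '{"spec": {"selector": {"app": "bankapp", "version": "${newEnv}"}}}'
                """
            }
            echo "Traffic switched to the ${newEnv} environment"
        }
    }
}
```

Because only the Service selector changes, the switch is effectively instantaneous and just as easy to reverse.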
🔍 Verify Deployment
stage('Verify Deployment') { ... }
📝 Explanation: Confirms the app pods are running and the service is targeting the correct version.
Triggering the Pipeline Using Build Parameters
Once the pipeline script is fully configured, the final step is to save and run the pipeline using the defined parameters.
💾 Save the Pipeline
After pasting the complete pipeline script into the Jenkins editor:
Scroll down and click Save.
This stores your pipeline configuration under the job name (e.g., Blue-Green-CICD).
▶️ Run the Pipeline Using Build Parameters
Now click Build with Parameters on the left sidebar.
Before the build starts, you’ll be presented with:
● DEPLOY_ENV → Choose either blue or green
● DOCKER_TAG → Select the Docker image tag (same as environment)
● SWITCH_TRAFFIC → Check this to switch live traffic to the selected environment
Click Build to start the CI/CD workflow.
🔁 Example Use Case
● On the first deployment, choose:
DEPLOY_ENV = blue
DOCKER_TAG = blue
Accessing the Application via AWS Load Balancer
Once your Jenkins pipeline has successfully deployed the application to the EKS cluster, it will be exposed using a Kubernetes Service of type LoadBalancer. This automatically creates an AWS Load Balancer with a public DNS endpoint.
🧭 How to Get the Load Balancer URL
To find the DNS name of the Load Balancer from your terminal, run:
kubectl get all -n webapps
or more specifically:
kubectl get svc -n webapps
Look for the service named bankapp-service (or whatever you named it), and check the EXTERNAL-IP or LoadBalancer Ingress column. You’ll see an AWS DNS URL like:
ab3346b97430849d2a587cb8b2c8c638-1815145407.us-east-1.elb.amazonaws.com
🌐 Open in Browser
Take the DNS URL from the terminal and paste it into your web browser:
http://ab3346b97430849d2a587cb8b2c8c638-1815145407.us-east-1.elb.amazonaws.com
You should now see your application dashboard live in the browser, served from your selected deployment environment (blue or green).
On the second deployment (to green), choose:
DEPLOY_ENV = green
DOCKER_TAG = green
SWITCH_TRAFFIC = ✓
This will:
● Deploy the new version to the green environment
● Then switch traffic from blue to green
✅ From now onward, all deployments can be managed dynamically using these pipeline parameters, allowing you to alternate between blue and green environments and maintain zero-downtime releases.
Next, we'll verify whether our build artifacts were stored in Nexus. We'll log in to the Nexus server and click Browse; per our Nexus configuration, the build artifacts should appear under the snapshots repository.
❓ Why Use Blue-Green Deployment in This Project?
We use Blue-Green Deployment to ensure zero-downtime releases and enable safe rollbacks. By maintaining two identical environments (Blue and Green), we can:
● Deploy new changes to the inactive environment (e.g., Green)
● Test thoroughly without affecting live users
● Instantly switch traffic once verified
● Roll back easily by reverting traffic if an issue occurs
This approach provides high availability, minimized risk, and seamless updates, making it ideal for production deployments in Kubernetes.
Verify Nexus Repository for Build Artifacts
After a successful pipeline run, Maven should have deployed the generated .jar or .war files to your configured Nexus repository (usually a snapshot or release repo). Let’s verify that the artifacts are stored correctly.
Login to Nexus
Open your Nexus server in a browser:
http://<your-nexus-server-ip>:8081
Login using your admin credentials (or any authorized user).
Navigate to "Browse"
On the left sidebar, click:
Browse → Repositories
Select the Correct Repository
Based on your Maven configuration (from settings.xml), click on:
maven-snapshots
Verify Artifact Path
Navigate through the folders by your groupId / artifactId / version path. For example, a project with groupId com.example, artifactId bankapp, and version 1.0-SNAPSHOT would appear under com/example/bankapp/1.0-SNAPSHOT/.
🏁 Conclusion
In this blog, we walked through the complete setup of a production-grade Blue-Green CI/CD pipeline from scratch using industry-standard tools like Jenkins, Docker, SonarQube, Nexus, and Amazon EKS (Kubernetes).
We covered:
● Provisioning cloud infrastructure using Terraform
● Installing and integrating essential DevOps tools
● Building a flexible Jenkins pipeline with build parameters
● Performing code quality checks with SonarQube
● Ensuring security using Trivy scans
● Managing artifacts through Nexus Repository Manager
● Executing Blue-Green Deployments with manual traffic switching
● Verifying deployments through AWS Load Balancer
This pipeline not only enables zero-downtime deployments but also enforces code quality, artifact management, and security scanning—all essential elements of a modern DevOps lifecycle.
Whether you're deploying microservices or monoliths, this approach gives your team confidence in releases, rollback capabilities, and continuous delivery at scale.
Written by

Subroto Sharma
I'm a passionate and results-driven DevOps Engineer with hands-on experience in automating infrastructure, optimizing CI/CD pipelines, and enhancing software delivery through modern DevOps and DevSecOps practices. My expertise lies in bridging the gap between development and operations to streamline workflows, increase deployment velocity, and ensure application security at every stage of the software lifecycle. I specialize in containerization with Docker and Kubernetes, infrastructure-as-code using Terraform, and managing scalable cloud environments—primarily on AWS. I’ve worked extensively with tools like Jenkins, GitHub Actions, SonarQube, Trivy, and various monitoring/logging stacks to build secure, efficient, and resilient systems. Driven by automation and a continuous improvement mindset, I aim to deliver value faster and more reliably by integrating cutting-edge tools and practices into development pipelines.