Zero-Downtime Delivery with Blue-Green Deployment on AWS EKS

Subroto Sharma
30 min read

Git Repository: https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD

In today's fast-paced DevOps world, delivering high-quality applications with minimal downtime is a key business requirement. Blue-Green Deployment has emerged as a popular strategy to achieve zero-downtime releases by maintaining two separate production environments—Blue (current live) and Green (new version).

This comprehensive guide will walk you through the fundamentals and practical implementation of a Production-Ready CI/CD Pipeline using the Blue-Green Deployment strategy. Whether you're deploying a microservice on AWS ECS, Kubernetes, or using Jenkins and Docker, this guide will help you understand how to build a resilient deployment workflow that reduces risk, increases reliability, and enhances user experience.

Infrastructure Setup

Step 1: Provisioning EC2 Infrastructure for CI/CD Pipeline

To build a robust and scalable Blue-Green Deployment CI/CD pipeline, we'll launch four dedicated EC2 instances, each with a specific purpose. These servers will be deployed on Ubuntu 24.04 LTS, with sufficient storage and public network access.

EC2 Server Overview

| Server Name | Purpose | OS Version | Storage | Public Access | Role |
| --- | --- | --- | --- | --- | --- |
| Master Server | Orchestration | Ubuntu 24.04 | 25 GB | Yes | Central admin node |
| Jenkins Server | CI/CD Automation | Ubuntu 24.04 | 25 GB | Yes | Pipeline runner |
| SonarQube Server | Code Quality | Ubuntu 24.04 | 25 GB | Yes | Static analysis |
| Nexus Server | Artifact Storage | Ubuntu 24.04 | 25 GB | Yes | Binary repository |

Security Configuration

🔐 Security Group Configuration

Before launching these servers, you must create a Security Group with access to required ports:

✅ Inbound Rules:

| Port | Protocol | Purpose |
| --- | --- | --- |
| 22 | TCP | SSH access (manual) |
| 80 | TCP | HTTP traffic |
| 443 | TCP | HTTPS traffic |
| 8080 | TCP | Jenkins, Nexus |
| 8081 | TCP | Nexus Repository |
| 9000 | TCP | SonarQube Dashboard |

🔒 Security Tip: You can restrict SSH access (port 22) to your own IP for enhanced security.
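If you prefer the CLI, here is a minimal sketch that creates an equivalent security group; the group name cicd-sg is hypothetical, and the example assumes a default VPC (in a custom VPC you would pass --group-id instead of --group-name):

# Create the security group (name is arbitrary)
aws ec2 create-security-group --group-name cicd-sg --description "CI/CD pipeline servers"

# Restrict SSH (port 22) to your current public IP, per the tip above
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress --group-name cicd-sg --protocol tcp --port 22 --cidr "${MY_IP}/32"

# Open the web and tool ports listed in the table
for PORT in 80 443 8080 8081 9000; do
  aws ec2 authorize-security-group-ingress --group-name cicd-sg --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done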

🔑 Create a Key Pair

To securely connect to each instance via SSH:

  1. Go to the EC2 Dashboard > Key Pairs

  2. Click Create key pair

  3. Set a name like ci-cd-key

  4. Choose RSA or ED25519, then download the .pem file securely

  5. Keep this file safe and private—use it when connecting to the servers
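If you prefer the AWS CLI, a sketch of the same key-pair creation (assuming the CLI is already configured with sufficient permissions):

# Create the key pair and write the private key to a local .pem file
aws ec2 create-key-pair --key-name ci-cd-key --query 'KeyMaterial' --output text > ci-cd-key.pem

# SSH refuses world-readable keys, so lock the file down
chmod 400 ci-cd-key.pem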

Launch EC2 Instances

🚀 Launch EC2 Instances (Repeat for Each Server)

For each server, follow these steps:

· Go to EC2 Dashboard → Launch Instance

· Select Ubuntu 24.04 LTS AMI

· Choose instance type (e.g., t3.medium for Jenkins, t2.medium for others)

· Attach public subnet

· Set storage to 25 GB

· Assign the security group created above

· Use the key pair created earlier

· Tag each instance appropriately:

1. Name: Master Server

2. Name: Jenkins Server

3. Name: SonarQube Server

4. Name: Nexus Server

Once these instances are up and running, you'll have a strong foundation for building your CI/CD pipeline with automated build, test, quality check, and deployment workflows.

Tool Installation and Configuration

🗂️ Step-by-Step: Installing and Accessing Nexus on an EC2 Server via Docker

In this section, we’ll install and run Nexus Repository Manager inside a Docker container on your EC2 instance. Nexus will serve as your private artifact repository for storing Maven, npm, Docker, and other packages used in your DevOps pipeline.

🔐 Step 1: SSH into the Nexus Server

Use your .pem key and the server’s public IP to connect via SSH:

ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>

🔄 Step 2: Update the Server

Before installing any packages, update your package repositories

sudo apt update && sudo apt upgrade -y

🐳 Step 3: Install Docker

Install Docker to run Nexus as a container:

sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker

👤 Step 4: Create a Docker User (Non-root)

It’s a good practice not to run containers as root. Add your Ubuntu user to the Docker group:

sudo usermod -aG docker ubuntu

🔁 Log out and log back in (or run newgrp docker) for changes to take effect.

📦 Step 5: Run the Nexus Docker Container

docker run -d -p 8081:8081 --name nexus sonatype/nexus3
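Note that this command keeps all Nexus data inside the container's writable layer. As an optional improvement, you could mount a named volume (the volume name here is arbitrary) so repository contents survive container re-creation; /nexus-data is the image's documented data directory:

docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3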

🌐 Step 6: Access Nexus in Browser

Open your browser and navigate to:

http://<your-ec2-public-ip>:8081

You’ll see the Nexus login page.

🔑 Step 7: Get the Default Admin Password

The default username is admin, but to retrieve the password, you need to enter the running container:

docker exec -it nexus /bin/bash
cat sonatype-work/nexus3/admin.password

🔁 Step 8: Login and Change the Password

● Go back to your browser and log in with:

Username: admin

Password: (paste the value from above)

● Follow the prompt to set your own secure password

Now Nexus is ready to use as your private artifact repository! 🎉

🔎 Step-by-Step: Installing SonarQube on EC2 Using Docker

SonarQube is a powerful tool for continuous code quality inspection. It can analyze code for bugs, vulnerabilities, and code smells. In this guide, we'll set up SonarQube using Docker on an EC2 instance.

🔐 Step 1: SSH into the SonarQube Server

Use your .pem key and public IP to log in:

ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>

🔄 Step 2: Update the Server

Always begin by updating system packages:

sudo apt update && sudo apt upgrade -y

🐳 Step 3: Install Docker

SonarQube will run inside a Docker container. Install Docker using:

sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker

👤 Step 4: Create a Docker User (Non-root)

Avoid running Docker as the root user. Add your current user (e.g., ubuntu) to the Docker group:

sudo usermod -aG docker ubuntu

🔁 Log out and log back in (or run newgrp docker) for changes to take effect.

📦 Step 5: Run the SonarQube Docker Container

Now, run the SonarQube container:

docker run -d --name sonarqube -p 9000:9000  sonarqube:latest

This will:

● Run SonarQube detached

● Map the default web interface to port 9000
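One caveat worth knowing: SonarQube embeds Elasticsearch, which needs a higher vm.max_map_count than the Ubuntu default. If the container exits shortly after starting, raise it (524288 is the value SonarQube's documentation recommends) and re-run the container:

# Raise the mmap count for the current boot...
sudo sysctl -w vm.max_map_count=524288

# ...and persist it across reboots
echo "vm.max_map_count=524288" | sudo tee -a /etc/sysctl.conf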

🌐 Step 6: Access SonarQube in Your Browser

http://<your-ec2-public-ip>:9000

You should see the SonarQube login page.

🔐 Step 7: Default Credentials

Use the following to log in:

Username: admin

Password: admin

🔁 Step 8: Change the Default Password

After logging in for the first time, SonarQube will prompt you to change the default password. Enter a secure password of your choice to continue.

🔐 How to Generate a SonarQube Token and Set Up a Webhook

To integrate SonarQube with Jenkins (or any CI/CD tool), you'll need an authentication token and a webhook for real-time Quality Gate feedback.

🔑 Step 1: Generate SonarQube Token

  1. Login to your SonarQube dashboard.

  2. On the top right, click on your user avatar → My Account.

  3. Navigate to the "Security" tab.

  4. In the "Generate Tokens" section:
    ○ Name: Enter a descriptive name (e.g., jenkins-token).
    ○ Click "Generate".

  5. Copy the token and save it securely (you won’t be able to see it again).

✅ Use this token as a Secret Text credential in Jenkins under:
Manage Jenkins → Credentials → Global → Add Credentials

🔁 Step 2: Configure Webhook in SonarQube

Webhooks allow SonarQube to send Quality Gate status back to Jenkins.

  1. From the SonarQube main dashboard, go to: Projects → [Select your project]

  2. Click on the "Project Settings" (gear icon).

  3. Go to "Webhooks".

  4. Click "Create".

Name: Jenkins
URL:

http://<your-jenkins-server>:<port>/sonarqube-webhook/

Click "Save".

EKS Cluster Provisioning

🖥️ Post-Launch Setup on the Master Server

After launching the EC2 instance that serves as the Master Server, connect to it using its public IP address and the .pem key file. Once connected, perform the following setup steps to prepare the server for provisioning the EKS cluster:

🔧 Tasks to Perform on the Master Server:

Update the OS Package Repository

sudo apt update -y

Install the AWS CLI on this server:

sudo apt update -y && \
sudo apt install -y unzip curl && \
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
sudo ./aws/install && \
aws --version

Terraform Installation

Install Terraform from HashiCorp's official apt repository:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list && \
sudo apt update && sudo apt install -y terraform && \
terraform -version

Create an IAM User with Access & Secret Key

To securely interact with AWS services using the AWS CLI, Terraform, or CI/CD tools like Jenkins, we need to create a dedicated IAM user with programmatic access.

Follow these steps to create an IAM user with Access Key ID and Secret Access Key:

Create IAM User with Programmatic Access

  1. Go to the AWS Management Console

  2. Navigate to IAM > Users

  3. Click Add users

Configuration Steps

  1. User Details

    • User name: Blue-Green-Cred

    • Access type: ✅ Programmatic access

  2. Set Permissions

    • Choose Attach policies directly

    • Select AdministratorAccess

  3. Tags (Optional)

    • Key: Project

    • Value: Blue-Green-CICD

  4. Review and Create

    • Review the configuration and click Create user

Save Your Access Credentials

  • Access Key ID

  • Secret Access Key

Click Download .csv and store it securely.

Configure AWS CLI with Access & Secret Key

After creating an IAM user with programmatic access, the next step is to configure the AWS CLI on your EC2 server.

🔧 Run aws configure

On your EC2 instance (Ubuntu 24.04), open the terminal and run:

aws configure

📝 When prompted, enter the following:

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json

Replace the example values above with the Access Key ID and Secret Access Key you downloaded earlier.
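To confirm the CLI is authenticated correctly, you can ask AWS who you are:

# Should print the account ID and the ARN of the Blue-Green-Cred user
aws sts get-caller-identity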

Breakdown of Terraform Code for EKS Cluster Provisioning

But before we launch the infrastructure, it's important to understand the Terraform code we’re about to apply.

This review will give you clarity on how the EKS cluster is being configured — including VPC setup, subnets, IAM roles, security groups, and the actual cluster and node group definitions.

Taking a moment to walk through the code ensures:

● You know what resources will be created
● You can identify any region-specific or account-specific values
● You ensure secure and optimized resource configurations

In the next section, we'll break down each major block of the Terraform script and then proceed with initialization and deployment.

📌 1. Provider Configuration

provider "aws" {
  region = "us-east-1"
}

This tells Terraform to use the AWS provider and deploy resources in the us-east-1 region.

2. VPC & Subnets

resource "aws_vpc" "DevOpsSubroto_vpc" {
  cidr_block = "10.0.0.0/16"
}

Creates a custom Virtual Private Cloud (VPC) with a /16 CIDR block. This will serve as the networking layer for all your AWS resources.

resource "aws_subnet" "DevOpsSubroto_subnet" {
  count = 2
  cidr_block = cidrsubnet(...)
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
}

Creates two public subnets, one in each availability zone (us-east-1a and us-east-1b). They are assigned unique CIDR ranges using cidrsubnet().
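To see what cidrsubnet() actually produces here, you can evaluate it in the Terraform console; with the /16 VPC above and 8 extra prefix bits, the two subnets come out as consecutive /24 blocks:

terraform console
> cidrsubnet("10.0.0.0/16", 8, 0)
"10.0.0.0/24"
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"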

3. Internet Gateway & Route Table

resource "aws_internet_gateway" "DevOpsSubroto_igw" { ... }

Creates an Internet Gateway and attaches it to the VPC. A separate aws_route_table resource (shown in the full main.tf below) defines a default route (0.0.0.0/0) pointing to the IGW.

resource "aws_route_table_association" "a" {
  count = 2
}

Associates the route table with each subnet, making them public subnets.

🔐 4. Security Groups

resource "aws_security_group" "DevOpsSubroto_cluster_sg" { ... }

Creates a security group for the EKS control plane with open egress (outbound) rules.

resource "aws_security_group" "DevOpsSubroto_node_sg" { ... }

Creates a security group for worker nodes with open ingress and egress. This should be tightened in production for better security.

☸️ 5. EKS Cluster and Node Group

resource "aws_eks_cluster" "DevOpsSubroto" {
  role_arn = aws_iam_role.DevOpsSubroto_cluster_role.arn
}

Provisions the EKS control plane, specifying:

● VPC subnets
● IAM role for EKS
● Security group for the control plane

resource "aws_eks_node_group" "DevOpsSubroto" {
  desired_size = 3
  instance_types = ["t2.large"]
  remote_access { ... }
}

Provisions a node group with:

● 3 EC2 instances of type t2.large
● SSH access enabled
● Role for EKS worker nodes
● Public subnet association

🔐 6. IAM Roles and Policies

resource "aws_iam_role" "DevOpsSubroto_cluster_role" { ... }

Creates an IAM role assumed by the EKS control plane, allowing it to manage AWS resources.

resource "aws_iam_role_policy_attachment" "DevOpsSubroto_cluster_role_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

Attaches the required AmazonEKSClusterPolicy to the control plane role.

🔧 Node Group IAM Role

resource "aws_iam_role" "DevOpsSubroto_node_group_role" { ... }

Creates an IAM role for EC2 worker nodes with the following permissions:

● AmazonEKSWorkerNodePolicy: Core EKS node functionality
● AmazonEKS_CNI_Policy: Allows Kubernetes networking
● AmazonEC2ContainerRegistryReadOnly: Pull images from ECR

main.tf

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "DevOpsSubroto_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "DevOpsSubroto-vpc"
  }
}

resource "aws_subnet" "DevOpsSubroto_subnet" {
  count = 2
  vpc_id                  = aws_vpc.DevOpsSubroto_vpc.id
  cidr_block              = cidrsubnet(aws_vpc.DevOpsSubroto_vpc.cidr_block, 8, count.index)
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true

  tags = {
    Name = "DevOpsSubroto-subnet-${count.index}"
  }
}

resource "aws_internet_gateway" "DevOpsSubroto_igw" {
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id

  tags = {
    Name = "DevOpsSubroto-igw"
  }
}

resource "aws_route_table" "DevOpsSubroto_route_table" {
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.DevOpsSubroto_igw.id
  }

  tags = {
    Name = "DevOpsSubroto-route-table"
  }
}

resource "aws_route_table_association" "a" {
  count          = 2
  subnet_id      = aws_subnet.DevOpsSubroto_subnet[count.index].id
  route_table_id = aws_route_table.DevOpsSubroto_route_table.id
}

resource "aws_security_group" "DevOpsSubroto_cluster_sg" {
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "DevOpsSubroto-cluster-sg"
  }
}

resource "aws_security_group" "DevOpsSubroto_node_sg" {
  vpc_id = aws_vpc.DevOpsSubroto_vpc.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "DevOpsSubroto-node-sg"
  }
}

resource "aws_eks_cluster" "DevOpsSubroto" {
  name     = "DevOpsSubroto-cluster"
  role_arn = aws_iam_role.DevOpsSubroto_cluster_role.arn

  vpc_config {
    subnet_ids         = aws_subnet.DevOpsSubroto_subnet[*].id
    security_group_ids = [aws_security_group.DevOpsSubroto_cluster_sg.id]
  }
}

resource "aws_eks_node_group" "DevOpsSubroto" {
  cluster_name    = aws_eks_cluster.DevOpsSubroto.name
  node_group_name = "DevOpsSubroto-node-group"
  node_role_arn   = aws_iam_role.DevOpsSubroto_node_group_role.arn
  subnet_ids      = aws_subnet.DevOpsSubroto_subnet[*].id

  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }

  instance_types = ["t2.large"]

  remote_access {
    ec2_ssh_key = var.ssh_key_name
    source_security_group_ids = [aws_security_group.DevOpsSubroto_node_sg.id]
  }
}

resource "aws_iam_role" "DevOpsSubroto_cluster_role" {
  name = "DevOpsSubroto-cluster-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "DevOpsSubroto_cluster_role_policy" {
  role       = aws_iam_role.DevOpsSubroto_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role" "DevOpsSubroto_node_group_role" {
  name = "DevOpsSubroto-node-group-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "DevOpsSubroto_node_group_role_policy" {
  role       = aws_iam_role.DevOpsSubroto_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "DevOpsSubroto_node_group_cni_policy" {
  role       = aws_iam_role.DevOpsSubroto_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "DevOpsSubroto_node_group_registry_policy" {
  role       = aws_iam_role.DevOpsSubroto_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

Terraform Variables – variables.tf

To make our Terraform code more reusable and configurable, we define variables in a separate file called variables.tf. One such variable is the SSH key name, which enables secure remote access to your EC2 worker nodes.

variables.tf

variable "ssh_key_name" {
  description = "The name of the SSH key pair to use for instances"
  type        = string
  default     = "SecOps-Key"
}

Terraform Outputs – output.tf

Once your Terraform configuration is applied, it's important to extract and display key resource details. This is where the output.tf file comes in.

Terraform outputs allow you to view important resource identifiers that you can reuse for:

● Debugging
● Interconnecting with other modules
● Future automation and scripting

output "cluster_id" {
    value = aws_eks_cluster.DevOpsSubroto.id
  }
 
  output "node_group_id" {
    value = aws_eks_node_group.DevOpsSubroto.id
  }
 
  output "vpc_id" {
    value = aws_vpc.DevOpsSubroto_vpc.id
  }
 
  output "subnet_ids" {
    value = aws_subnet.DevOpsSubroto_subnet[*].id
  }
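After terraform apply completes, any of these values can be read back on demand, which is handy for scripting:

# Print a single output, or a list output as JSON
terraform output cluster_id
terraform output -json subnet_ids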

Clone Your Git Repository into the Server

Now that your EC2 instance is configured with AWS CLI and other essentials, the next step is to clone your Git repository. Use the following command to clone your repository. Replace the URL with your actual Git repository:

git clone https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD.git

Provisioning the EKS Cluster with Terraform

After cloning the Git repository, we’ll navigate into the project directory that contains the Terraform configuration files to provision the EKS cluster.

We are using Terraform to launch and manage the EKS infrastructure on AWS.

To move into the directory where your Terraform code is located, run the following command:

cd Blue-Green-Deployment-with-CICD
cd Cluster

Now it's time to provision the EKS cluster using Terraform.

🔧 1. Initialize Terraform

terraform init

This command initializes your working directory. It:

● Downloads the AWS provider plugin
● Prepares the backend (if configured)
● Sets up required modules

🧪 2. Validate Terraform Code

terraform validate

This checks whether your configuration files are syntactically valid.

📝 3. Preview Infrastructure Changes

terraform plan

This command gives you a dry run of what Terraform will do when applied:

● What resources will be created, modified, or destroyed
● Helps avoid unintended changes

⚙️ 4. Apply the Infrastructure

terraform apply --auto-approve

After running the command, it takes roughly 8-10 minutes to provision the cluster.

Verifying the EKS Cluster and Nodes

Now that we've applied the Terraform configuration, it’s time to verify whether the EKS cluster and its associated worker nodes have been created successfully.

✅ 1. Install kubectl

To interact with your Kubernetes cluster, you need the Kubernetes CLI tool—kubectl. Install it using:

sudo snap install kubectl --classic

📡 2. Try Fetching the Node List

After installation, try running:

kubectl get nodes

However, you’ll notice that no nodes or cluster information is displayed. That’s because your system is not yet connected to the EKS cluster.

🔗 3. Update Your Kubeconfig File

To connect kubectl with your EKS cluster, update the kubeconfig file using the AWS CLI:

aws eks --region us-east-1 update-kubeconfig --name DevOpsSubroto-cluster

🔄 Replace us-east-1 with your actual region and DevOpsSubroto-cluster with your EKS cluster name, if different.

📋 4. Re-check Cluster Status

After updating the kubeconfig file, run:

kubectl get nodes

You should now see a list of worker nodes in Ready status, confirming that your EKS cluster is up and running.

Create a Service Account and Role, Bind the Role, and Generate a Token Secret for the Service Account

🔧 Creating a ServiceAccount for Jenkins in Kubernetes

To allow Jenkins to interact with your Kubernetes cluster—for example, to deploy applications or manage resources via a pipeline—you need to create a ServiceAccount.

A ServiceAccount provides an identity for processes running in a Pod, enabling secure API access without relying on user credentials.
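The manifests below all target a webapps namespace, so create it first if it doesn't exist yet:

kubectl create namespace webapps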

jenkins-serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps

To create this ServiceAccount in your cluster, run:

kubectl apply -f jenkins-serviceaccount.yaml

You can verify creation with:

kubectl get serviceaccount jenkins -n webapps

🔐 Defining Kubernetes Role for Jenkins

To allow Jenkins to deploy applications and interact with Kubernetes resources securely, we need to assign specific permissions using RBAC (Role-Based Access Control). Below is a Kubernetes Role that grants wide access to various resources in the webapps namespace.

jenkins-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - pods
      - secrets
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingresses
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - replicasets
      - replicationcontrollers
      - resourcequotas
      - serviceaccounts
      - services
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete

To apply jenkins-role.yaml

kubectl apply -f jenkins-role.yaml

This Role grants broad access to many resources. It’s suitable for a CI/CD tool like Jenkins, but in production environments, you may want to restrict resources and verbs to follow the principle of least privilege.

🔗 Binding the Role to Jenkins with a RoleBinding

Now that we’ve created a Role with the necessary Kubernetes permissions, the next step is to bind that Role to the Jenkins ServiceAccount. This is done using a RoleBinding, which links a Role to a user, group, or ServiceAccount within the same namespace.

jenkins-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps

Apply the RoleBinding

kubectl apply -f jenkins-rolebinding.yaml

You can confirm the RoleBinding with:

kubectl get rolebinding app-rolebinding -n webapps

With this RoleBinding in place, your Jenkins pods running in the webapps namespace can now access Kubernetes resources as defined in the app-role.

🔐 Creating a Secret Token for a ServiceAccount

To enable an external system (like Jenkins) to authenticate with your Kubernetes cluster using a ServiceAccount, you can manually create a Secret of type kubernetes.io/service-account-token.

This secret contains a token and certificate that can be used for API access.

serviceaccount-token-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysecretname
  namespace: webapps
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token

✅ Apply the Secret

kubectl apply -f serviceaccount-token-secret.yaml

Add Kubernetes Secret Token to Jenkins Credentials

Now that we’ve created a service account token as a Kubernetes Secret, the next step is to copy the token and add it to Jenkins as a global credential. This token allows Jenkins to authenticate and deploy workloads into the EKS cluster.
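One way to extract the token is with kubectl and jsonpath (mysecretname matches the secret created above; the token is stored base64-encoded):

# Decode and print the ServiceAccount token
kubectl get secret mysecretname -n webapps -o jsonpath='{.data.token}' | base64 -d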

Jenkins CI/CD Pipeline

⚙️ Step-by-Step: Jenkins Installation and DevOps Configuration on EC2

In this section, we’ll set up a Jenkins CI/CD server on an EC2 instance. Jenkins will serve as the automation hub for building, testing, analyzing, and deploying applications using Docker, Nexus, and SonarQube.

🔐 Step 1: SSH into the Jenkins Server

Connect to your EC2 instance using the .pem key and public IP:

ssh -i /path/to/your-key.pem ubuntu@<your-public-ip>

🔄 Step 2: Update the Server

Update package lists and install available updates:

sudo apt update && sudo apt upgrade -y

☕ Step 3: Install Java 17

Jenkins requires Java to run. Install Java 17 (recommended):

sudo apt install -y openjdk-17-jdk
java -version

🛠️ Step 4: Install Jenkins

Add the Jenkins repository (using its current 2023 signing key, since apt-key is deprecated on recent Ubuntu) and install:

sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian/jenkins.io-2023.key
echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt update
sudo apt install -y jenkins

👤 Step 5: Add the Jenkins User to the Docker Group

If Docker is already installed (or once it is, later in this guide), add the jenkins user to the docker group so Jenkins jobs can run Docker commands:

sudo usermod -aG docker jenkins

🌐 Step 6: Access Jenkins via Browser

Open Jenkins in your browser:

http://<your-ec2-public-ip>:8080

You'll see the initial unlock screen.

🔑 Step 7: Retrieve Jenkins Initial Admin Password

Copy the command shown in the Jenkins UI, then run it in your EC2 terminal:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password output and paste it into the Jenkins UI to unlock it.

🔌 Step 8: Install Plugins and Setup Admin User

● Click Install suggested plugins

● Wait for installation to complete

● Create your admin user with username, password, and email

🧩 Step 9: Install Project-Specific Plugins

From the Jenkins dashboard:

Manage Jenkins → Manage Plugins → Available

Search for and install the plugins this pipeline relies on: SonarQube Scanner, Config File Provider, Pipeline Maven Integration, Docker Pipeline, and Kubernetes CLI.

🔐 Step 10: Add Required Credentials

Go to:

Manage Jenkins → Credentials → Global → Add Credentials

GitHub (Username + Token or SSH Key)

Docker Hub (Username + Password/API token)

SonarQube Token

Nexus Username & Password

🧬 Step 11: Configure Environment Variables

From Jenkins dashboard:

Manage Jenkins → System

Configure Maven in Jenkins

To build and package Java applications from Jenkins pipelines, you need to configure Apache Maven within the Jenkins system settings.

Manage Jenkins → Global Tool Configuration

➕ Add Maven

  1. Click on “Add Maven”.

  2. Set the Name as: maven3 (or any name you prefer).

  3. Check the box: ✅ "Install automatically"

  4. Choose a Maven version to install (e.g., 3.9.6 or latest available).

Configure SonarQube in Jenkins

To enable Jenkins to perform static code analysis using SonarQube, you need to configure the SonarQube server in Jenkins system settings. This integration allows Jenkins jobs to trigger code scans and visualize results inside the Jenkins UI.

Manage Jenkins → Global Tool Configuration

➕ Add SonarQube Scanner

  1. Click on “Add SonarQube Scanner”.

  2. Set the Name as: sonar-scanner (this must match the tool name the pipeline references via tool 'sonar-scanner').

  3. ✅ Check the box for “Install automatically”.

  4. Select a SonarQube Scanner version (e.g., latest stable).

🔐 Add SonarQube Server

If you haven't added the SonarQube server yet:

Manage Jenkins → Configure System

Scroll to the SonarQube servers section.

Click “Add SonarQube”.

Set:

  1. Name: sonar (this will be referenced in pipelines)

  2. Server URL: http://<your-sonarqube-ip>:9000

  3. Authentication Token: Add from Jenkins credentials

Configure Nexus Repository in Jenkins via Maven settings.xml

To enable Jenkins to push artifacts to Nexus or pull dependencies, you need to configure a custom Maven settings.xml file that includes the Nexus repository details and credentials.

Manage Jenkins → Managed files

➕ Add a New Configuration File

  1. Click "Add a new Config"

  2. From the list, choose: Global Maven settings.xml

  3. Set a Name:
    ○ Example: settings.xml

  4. Click Next

📝 Configure the settings.xml Content

Add your customized settings.xml content that includes your Nexus repository information, such as:

settings.xml

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">

  <servers>
    <server>
      <id>maven-releases</id>
      <username>${NEXUS_USERNAME}</username>
      <password>${NEXUS_PASSWORD}</password>
    </server>
    <server>
      <id>maven-snapshots</id>
      <username>${NEXUS_USERNAME}</username>
      <password>${NEXUS_PASSWORD}</password>
    </server>
    <server>
      <id>nexus</id>
      <username>${NEXUS_USERNAME}</username>
      <password>${NEXUS_PASSWORD}</password>
    </server>
    <server>
      <id>internal-nexus</id>
      <username>${NEXUS_USERNAME}</username>
      <password>${NEXUS_PASSWORD}</password>
    </server>
  </servers>

  <mirrors>
    <mirror>
      <id>internal-nexus</id>
      <mirrorOf>*</mirrorOf>
      <name>Nexus Mirror</name>
      <url>${NEXUS_URL}/repository/maven-public/</url>
    </mirror>
  </mirrors>

  <profiles>
    <profile>
      <id>nexus-profile</id>
      <repositories>
        <repository>
          <id>central</id>
          <url>${NEXUS_URL}/repository/maven-public/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>${NEXUS_URL}/repository/maven-public/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>

  <activeProfiles>
    <activeProfile>nexus-profile</activeProfile>
  </activeProfiles>
</settings>
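A note on the placeholders: Maven interpolates settings.xml with system properties, so values like ${NEXUS_URL}, ${NEXUS_USERNAME}, and ${NEXUS_PASSWORD} must be supplied at build time (or replaced with literal values). A quick local sanity check might look like this, with hypothetical values:

# Supply the placeholders as -D system properties (values are examples only)
mvn deploy -DskipTests=true \
  -DNEXUS_URL=http://<your-nexus-ip>:8081 \
  -DNEXUS_USERNAME=admin \
  -DNEXUS_PASSWORD='<your-password>'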

🐳 Install Docker on Jenkins Server

If Docker is not yet installed:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

To install Docker, run the following command:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Add Jenkins to Docker group:

sudo usermod -aG docker jenkins

Restart Jenkins:

sudo systemctl restart jenkins

Install Trivy (for Security Scanning)

Trivy scans container images for vulnerabilities:

sudo apt install -y wget
wget https://github.com/aquasecurity/trivy/releases/download/v0.50.0/trivy_0.50.0_Linux-64bit.deb
sudo dpkg -i trivy_0.50.0_Linux-64bit.deb
trivy --version

✅ Final Step: Reboot and Verify Jenkins Setup

sudo reboot

Blue-Green Deployment Implementation

Create Jenkins Pipeline for Blue-Green CI/CD Deployment

Now that we’ve set up Jenkins with all necessary integrations (Docker, Maven, SonarQube, Nexus, etc.), we’ll create a Jenkins Pipeline job to automate your Blue-Green Deployment process.

Steps to Create a New Pipeline Job

  1. From the Jenkins dashboard, click “New Item”

  2. Enter the job name:
    Example: Blue-Green-CICD

  3. Select “Pipeline”

  4. Click OK

🗂️ Job Configuration

✅ General Section

● Enable: Discard old builds

● Set: Max # of builds to keep: 2

This helps keep your Jenkins server clean by retaining only the latest two builds.

💻 Pipeline Definition

Scroll down to the Pipeline section.

  1. Under Definition, select: Pipeline script

  2. Paste your pipeline script into the editor.

  3. Click Save

pipeline {
    agent any

    tools {
        maven 'maven3'
    }

    parameters {
        choice(name: 'DEPLOY_ENV', choices: ['blue', 'green'], description: 'Choose which environment to deploy: Blue or Green')
        choice(name: 'DOCKER_TAG', choices: ['blue', 'green'], description: 'Choose the Docker image tag for the deployment')
        booleanParam(name: 'SWITCH_TRAFFIC', defaultValue: false, description: 'Switch traffic between Blue and Green')
    }

    environment {
        IMAGE_NAME = "subrotosharma/bankapp"
        TAG = "${params.DOCKER_TAG}"
        SCANNER_HOME = tool 'sonar-scanner'
        KUBE_NAMESPACE = 'webapps'
    }

    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'git-cred', url: 'https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD.git'
            }
        }

        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }

        stage('Tests') {
            steps {
                sh "mvn test -DskipTests=true"
            }
        }

        stage('Trivy FS Scan') {
            steps {
                sh "trivy fs --format table -o fs.html ."
            }
        }

        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=Multitier -Dsonar.projectName=Multitier -Dsonar.java.binaries=target"
                }
            }
        }

        stage('Quality Gate Check') {
            steps {
                timeout(time: 1, unit: 'HOURS') {
                    waitForQualityGate abortPipeline: false
                }
            }
        }

        stage('Build') {
            steps {
                sh "mvn package -DskipTests=true"
            }
        }

        stage('Publish Artifact To Nexus') {
            steps {
                withMaven(
                    maven: 'maven3',
                    globalMavenSettingsConfig: 'settings.xml'  // Match ID from Jenkins managed files
                ) {
                    sh 'mvn deploy -DskipTests=true'
                }
            }
        }

        stage('Docker Build & Tag Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh "docker build -t ${IMAGE_NAME}:${TAG} ."
                    }
                }
            }
        }

        stage('Trivy Image Scan') {
            steps {
                sh "trivy image --format table -o image-scan.html ${IMAGE_NAME}:${TAG}"
            }
        }

        stage('Docker Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh "docker push ${IMAGE_NAME}:${TAG}"
                    }
                }
            }
        }

        stage('Deploy MySQL Deployment and Service') {
            steps {
                script {
                    withKubeConfig(
                        credentialsId: 'k8s-token',
                        serverUrl: 'https://3869C76F35091F8B57CD09F70E11CD30.gr7.us-east-1.eks.amazonaws.com',
                        namespace: "${KUBE_NAMESPACE}"
                    ) {
                        sh "kubectl apply -f mysql-ds.yml -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }

        stage('Deploy SVC-APP') {
            steps {
                script {
                    withKubeConfig(
                        credentialsId: 'k8s-token',
                        serverUrl: 'https://3869C76F35091F8B57CD09F70E11CD30.gr7.us-east-1.eks.amazonaws.com',
                        namespace: "${KUBE_NAMESPACE}"
                    ) {
                        sh """
                        if ! kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}; then
                            kubectl apply -f bankapp-service.yml -n ${KUBE_NAMESPACE}
                        fi
                        """
                    }
                }
            }
        }

        stage('Deploy to Kubernetes') {
            steps {
                script {
                    def deploymentFile = (params.DEPLOY_ENV == 'blue') ? 'app-deployment-blue.yml' : 'app-deployment-green.yml'
                    withKubeConfig(
                        credentialsId: 'k8s-token',
                        serverUrl: 'https://3869C76F35091F8B57CD09F70E11CD30.gr7.us-east-1.eks.amazonaws.com',
                        namespace: "${KUBE_NAMESPACE}"
                    ) {
                        sh "kubectl apply -f ${deploymentFile} -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }

        stage('Switch Traffic Between Blue & Green Environment') {
            when {
                expression { return params.SWITCH_TRAFFIC }
            }
            steps {
                script {
                    def newEnv = params.DEPLOY_ENV
                    withKubeConfig(
                        credentialsId: 'k8s-token',
                        serverUrl: 'https://3869C76F35091F8B57CD09F70E11CD30.gr7.us-east-1.eks.amazonaws.com',
                        namespace: "${KUBE_NAMESPACE}"
                    ) {
                        sh """
                        kubectl patch service bankapp-service -p '{"spec": {"selector": {"app": "bankapp", "version": "${newEnv}"}}}' -n ${KUBE_NAMESPACE}
                        """
                    }
                    echo " Traffic switched to the ${newEnv} environment."
                }
            }
        }

        stage('Verify Deployment') {
            steps {
                script {
                    def verifyEnv = params.DEPLOY_ENV
                    withKubeConfig(
                        credentialsId: 'k8s-token',
                        serverUrl: 'https://3869C76F35091F8B57CD09F70E11CD30.gr7.us-east-1.eks.amazonaws.com',
                        namespace: "${KUBE_NAMESPACE}"
                    ) {
                        sh """
                        kubectl get pods -l version=${verifyEnv} -n ${KUBE_NAMESPACE}
                        kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}
                        """
                    }
                }
            }
        }
    }
}

🧱 Pipeline Structure & Agent

pipeline {
    agent any

📝 Explanation: Runs the pipeline on any available Jenkins agent.

🔧 Tools Block

 tools {
        maven 'maven3'
    }

📝 Explanation: Uses maven3 (pre-configured in Jenkins) for Maven-based builds.

🔄 Parameters Block

    parameters {
        choice(name: 'DEPLOY_ENV', choices: ['blue', 'green'], description: 'Choose which environment to deploy: Blue or Green')
        choice(name: 'DOCKER_TAG', choices: ['blue', 'green'], description: 'Choose the Docker image tag for the deployment')
        booleanParam(name: 'SWITCH_TRAFFIC', defaultValue: false, description: 'Switch traffic between Blue and Green')
    }

📝 Explanation:

● DEPLOY_ENV: Selects target deployment (blue or green).

● DOCKER_TAG: Image tag to be used.

● SWITCH_TRAFFIC: Boolean toggle to switch live traffic.

🌍 Environment Variables

    environment {
        IMAGE_NAME = "subrotosharma/bankapp"
        TAG = "${params.DOCKER_TAG}"
        SCANNER_HOME = tool 'sonar-scanner'
        KUBE_NAMESPACE = 'webapps'
    }

📝 Explanation:

● IMAGE_NAME: Docker Hub image name.
● TAG: Set dynamically from user input.
● SCANNER_HOME: SonarQube scanner tool.
● KUBE_NAMESPACE: Target namespace in EKS.

🧾 Git Checkout

        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'git-cred', url: 'https://github.com/subrotosharma/Blue-Green-Deployment-with-CICD.git'
            }
        }

📝 Explanation: Clones the GitHub repo using provided credentials.

⚙️ Compile

        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }

📝 Explanation: Compiles Java source code using Maven.

✅ Tests

        stage('Tests') {
            steps {
                sh "mvn test -DskipTests=true"
            }
        }

📝 Explanation: Runs test phase, though tests are skipped (-DskipTests=true).

🔍 Trivy File System Scan

        stage('Trivy FS Scan') {
            steps {
                sh "trivy fs --format table -o fs.html ."
            }
        }

📝 Explanation: Scans source code for vulnerabilities using Trivy and generates fs.html.

📊 SonarQube Analysis

        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=Multitier -Dsonar.projectName=Multitier -Dsonar.java.binaries=target"
                }
            }
        }

📝 Explanation: Runs code analysis with SonarQube.

🛑 Quality Gate

        stage('Quality Gate Check') {
            steps {
                timeout(time: 1, unit: 'HOURS') {
                    waitForQualityGate abortPipeline: false
                }
            }
        }

📝 Explanation: Waits for SonarQube's quality gate results.

📦 Maven Build

        stage('Build') {
            steps {
                sh "mvn package -DskipTests=true"
            }
        }

📝 Explanation: Packages the application into a .jar file.

📤 Deploy Artifact to Nexus

        stage('Publish Artifact To Nexus') {
            steps {
                withMaven(
                    maven: 'maven3',
                    globalMavenSettingsConfig: 'settings.xml'
                ) {
                    sh 'mvn deploy -DskipTests=true'
                }
            }
        }

📝 Explanation: Deploys artifacts to Nexus Repository using a configured settings.xml.

🐳 Docker Build

        stage('Docker Build & Tag Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh "docker build -t ${IMAGE_NAME}:${TAG} ."
                    }
                }
            }
        }

📝 Explanation: Builds and tags the Docker image for deployment.

🔍 Docker Image Scan

        stage('Trivy Image Scan') {
            steps {
                sh "trivy image --format table -o image-scan.html ${IMAGE_NAME}:${TAG}"
            }
        }

📝 Explanation: Scans the Docker image for known vulnerabilities using Trivy.

📤 Docker Push

        stage('Docker Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh "docker push ${IMAGE_NAME}:${TAG}"
                    }
                }
            }
        }

📝 Explanation: Pushes the built image to Docker Hub.

🛢️ Deploy MySQL

        stage('Deploy MySQL Deployment and Service') {
            steps {
                script {
                    withKubeConfig(...) {
                        sh "kubectl apply -f mysql-ds.yml -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }

📝 Explanation: Deploys the MySQL backend to Kubernetes (if not already deployed).

🔧 Deploy App Service (Kubernetes SVC)

        stage('Deploy SVC-APP') {
            steps {
                script {
                    withKubeConfig(...) {
                        sh """
                        if ! kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}; then
                            kubectl apply -f bankapp-service.yml -n ${KUBE_NAMESPACE}
                        fi
                        """
                    }
                }
            }
        }

📝 Explanation: Applies the Kubernetes service for the app if it doesn't already exist.

🚀 Deploy to Kubernetes (Blue or Green)

stage('Deploy to Kubernetes') {
            steps {
                script {
                    def deploymentFile = (params.DEPLOY_ENV == 'blue') ? 'app-deployment-blue.yml' : 'app-deployment-green.yml'
                    withKubeConfig(...) {
                        sh "kubectl apply -f ${deploymentFile} -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }

📝 Explanation: Deploys to either the blue or green environment using the appropriate deployment manifest.
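For context, here is a minimal sketch of what a manifest like app-deployment-blue.yml could look like; the actual files live in the repo, and the container port is an assumption. The important part is that the pod labels (app: bankapp, version: blue) line up with the selector that the traffic-switch stage patches:

# Hypothetical sketch of app-deployment-blue.yml (the repo's file is authoritative)
kubectl apply -n webapps -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bankapp-blue
  namespace: webapps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bankapp
      version: blue
  template:
    metadata:
      labels:
        app: bankapp
        version: blue    # the service selector keys on this label
    spec:
      containers:
        - name: bankapp
          image: subrotosharma/bankapp:blue
          ports:
            - containerPort: 8080   # assumed application port
EOF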

🔁 Switch Traffic Between Blue/Green

        stage('Switch Traffic Between Blue & Green Environment') {
            when {
                expression { return params.SWITCH_TRAFFIC }
            }
            steps {
                script {
                    def newEnv = params.DEPLOY_ENV
                    withKubeConfig(...) {
                        sh """
                        kubectl patch service bankapp-service -p '{"spec": {"selector": {"app": "bankapp", "version": "${newEnv}"}}}' -n ${KUBE_NAMESPACE}
                        """
                    }
                    echo "✅ Traffic switched to the ${newEnv} environment."
                }
            }
        }

📝 Explanation: If SWITCH_TRAFFIC is enabled, it updates the Kubernetes service to route traffic to the selected version (blue or green).
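Because the switch is just a selector patch, rolling back is equally instant; the same command from the pipeline can be run by hand to point traffic back at the previous color:

# Manual rollback sketch: route traffic back to the blue pods
kubectl patch service bankapp-service -n webapps \
  -p '{"spec": {"selector": {"app": "bankapp", "version": "blue"}}}'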

🔍 Verify Deployment

   stage('Verify Deployment') {
            steps {
                script {
                    def verifyEnv = params.DEPLOY_ENV
                    withKubeConfig(...) {
                        sh """
                        kubectl get pods -l version=${verifyEnv} -n ${KUBE_NAMESPACE}
                        kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}
                        """
                    }
                }
            }
        }
    }
}

📝 Explanation: Confirms the app pods are running and the service is targeting the correct version.

Triggering the Pipeline Using Build Parameters

Once the pipeline script is fully configured, the final step is to save and run the pipeline using the defined parameters.

💾 Save the Pipeline

After pasting the complete pipeline script into the Jenkins editor:

  1. Scroll down and click Save.

This stores your pipeline configuration under the job name (e.g., Blue-Green-CICD).

▶️ Run the Pipeline Using Build Parameters

Now click Build with Parameters on the left sidebar.

Before the build starts, you’ll be presented with:

DEPLOY_ENV → Choose either blue or green
DOCKER_TAG → Select the Docker image tag (same as environment)
SWITCH_TRAFFIC → Check this to switch live traffic to the selected environment

Click Build to start the CI/CD workflow.

🔁 Example Use Case

● On the first deployment, choose:

DEPLOY_ENV = blue
DOCKER_TAG = blue
SWITCH_TRAFFIC = false

Accessing the Application via AWS Load Balancer

Once your Jenkins pipeline has successfully deployed the application to the EKS cluster, it will be exposed using a Kubernetes Service of type LoadBalancer. This automatically creates an AWS Load Balancer with a public DNS endpoint.

🧭 How to Get the Load Balancer URL

To find the DNS name of the Load Balancer from your terminal, run:

kubectl get all -n webapps

or more specifically:

kubectl get svc -n webapps

Look for the service named bankapp-service (or whatever you named it), and check the EXTERNAL-IP or LoadBalancer Ingress column. You’ll see an AWS DNS URL like:

ab3346b97430849d2a587cb8b2c8c638-1815145407.us-east-1.elb.amazonaws.com
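If you only want the hostname (for scripts or a quick health check), jsonpath can pull it directly:

# Print just the load balancer DNS name...
kubectl get svc bankapp-service -n webapps -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# ...and optionally confirm the app answers over HTTP
curl -I http://$(kubectl get svc bankapp-service -n webapps -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')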

🌐 Open in Browser

Take the DNS URL from the terminal and paste it into your web browser:

http://ab3346b97430849d2a587cb8b2c8c638-1815145407.us-east-1.elb.amazonaws.com

You should now see your application dashboard live in the browser, served from your selected deployment environment (blue or green).

On the second deployment (to green), choose:

DEPLOY_ENV = green
DOCKER_TAG = green
SWITCH_TRAFFIC = true

This will:

● Deploy the new version to the green environment
● Then switch traffic from blue to green

✅ From now onward, all deployments can be managed dynamically using these pipeline parameters, allowing you to alternate between blue and green environments and maintain zero-downtime releases.

Next, we'll confirm that the build artifacts were published to Nexus. Log in to the Nexus server, click Browse, and open the snapshots repository configured in settings.xml, where the build artifacts should appear.

❓ Why Use Blue-Green Deployment in This Project?

We use Blue-Green Deployment to ensure zero-downtime releases and enable safe rollbacks. By maintaining two identical environments (Blue and Green), we can:

● Deploy new changes to the inactive environment (e.g., Green)
● Test thoroughly without affecting live users
● Instantly switch traffic once verified

● Roll back easily by reverting traffic if an issue occurs

This approach provides high availability, minimized risk, and seamless updates, making it ideal for production deployments in Kubernetes.

Verify Nexus Repository for Build Artifacts

After a successful pipeline run, Maven should have deployed the generated .jar or .war files to your configured Nexus repository (usually a snapshot or release repo). Let’s verify that the artifacts are stored correctly.

Login to Nexus

Open your Nexus server in a browser:

http://<your-nexus-server-ip>:8081

Login using your admin credentials (or any authorized user).

Navigate to "Browse"

On the left sidebar, click:

Browse → Repositories

Select the Correct Repository

Based on your Maven configuration (from settings.xml), click on:

maven-snapshots

Verify Artifact Path

Navigate through the folders following your project's groupId / artifactId / version path; the deployed artifacts should appear under the latest snapshot version.

🏁 Conclusion

In this blog, we walked through the complete setup of a production-grade Blue-Green CI/CD pipeline from scratch using industry-standard tools like Jenkins, Docker, SonarQube, Nexus, and Amazon EKS (Kubernetes).

We covered:

  1. Provisioning cloud infrastructure using Terraform

  2. Installing and integrating essential DevOps tools

  3. Building a flexible Jenkins pipeline with build parameters

  4. Performing code quality checks with SonarQube

  5. Ensuring security using Trivy scans

  6. Managing artifacts through Nexus Repository Manager

  7. Executing Blue-Green Deployments with manual traffic switching

  8. Verifying deployments through AWS Load Balancer

This pipeline not only enables zero-downtime deployments but also enforces code quality, artifact management, and security scanning—all essential elements of a modern DevOps lifecycle.

Whether you're deploying microservices or monoliths, this approach gives your team confidence in releases, rollback capabilities, and continuous delivery at scale.
