Capstone DevOps Project: Enterprise-Grade CI/CD Pipeline with Kubernetes on AWS, Jenkins, Helm, Ingress, and Monitoring

Introduction 🚀

In today's fast-paced software development world, automating the process of building, testing, and deploying applications is essential for delivering features quickly and reliably. That's where CI/CD pipelines come into play.

In this project, I've built a complete, production-grade CI/CD pipeline using Jenkins, Kubernetes (EKS), and various open-source DevOps tools, all deployed on AWS. This setup not only automates the deployment of applications but also integrates monitoring (with Prometheus & Grafana) and secure ingress access with HTTPS using the Nginx Ingress Controller and Cert-Manager.

Whether you're a DevOps beginner looking to understand real-world pipelines or a professional aiming to implement enterprise-grade CI/CD systems, this blog will walk you through every step, from infrastructure setup to continuous delivery and monitoring.

By the end of this guide, you'll have a fully functional, cloud-native CI/CD pipeline with all the essential components configured, deployed, and running on Kubernetes.

Architecture Diagram of the Project

📌 Source Code & Project Repositories

To keep things simple and organized, I've divided the entire project into three separate repositories, each focusing on a different part of the DevSecOps workflow:

🔧 Project Repository (CI Repo)


🚀 CD Repository


โ˜๏ธ Infrastructure as Code (IaC) โ€” Terraform for EKS


🔒 Configure AWS Security Group

A Security Group in AWS acts like a virtual firewall. It controls what kind of traffic can come into or go out of your EC2 instances or services, keeping your infrastructure secure.

For this project, we'll either create a new security group or update an existing one with the required rules.

📌 Essential Security Group Rules for the Kubernetes Cluster

| Port(s) | Purpose | Why It's Needed |
| --- | --- | --- |
| 587 | SMTP (Email Notifications) | To allow tools like Jenkins to send email notifications |
| 22 | SSH Access | For secure shell access to EC2 instances (use with caution) |
| 80 and 443 | HTTP & HTTPS | For serving web traffic (Ingress, Jenkins, ArgoCD UI, etc.) |
| 3000 - 11000 | App-Specific Ports | For apps like Grafana (3000), Prometheus, and others |
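As a sketch, the same rules can also be applied from the command line with the AWS CLI. The group name `capstone-sg` is an assumption here; without `--vpc-id` the group lands in your default VPC:

```
# Hypothetical sketch: create a security group and open the ports listed above.
# The group name and 0.0.0.0/0 sources are for demo only; restrict in production.
SG_ID=$(aws ec2 create-security-group \
  --group-name capstone-sg \
  --description "Capstone DevOps project" \
  --query GroupId --output text)

# SMTP (587), SSH (22), HTTP (80), HTTPS (443)
for port in 587 22 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" --cidr 0.0.0.0/0
done

# App-specific range (Grafana, Prometheus, etc.)
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 3000-11000 --cidr 0.0.0.0/0
```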

✅ Best Practices for Security Group Setup

  • ๐Ÿ” Follow Least Privilege
    Only open the ports that your application actually needs. Avoid exposing everything โ€œjust in case.โ€

  • 🛑 Restrict SSH Access (Port 22)
    Limit SSH access to your IP or admin IPs only. Never leave it open to the entire internet (0.0.0.0/0); that is a big security risk. (I have left it open here for demo purposes only.)


Create EC2 Instances for Required Tools

To run essential DevOps tools like Nexus, SonarQube, Jenkins, and manage infrastructure, you'll need to create four separate EC2 instances on AWS.

📋 What You'll Be Creating:

| Instance Name | Purpose |
| --- | --- |
| Nexus | Artifact repository to store JAR files and other build artifacts |
| SonarQube | Static code analysis and code quality scanning |
| Jenkins | CI/CD automation server for building, testing, and triggering deployments |
| InfraServer | Used to provision the EKS cluster and manage infrastructure via Terraform |

🔧 Step 1: Launch EC2 Instances

  1. Go to the AWS EC2 Dashboard

  2. Click "Launch Instance"

โš™๏ธ Step 2: Configure Instance

  1. Set the number of instances to 4

  2. AMI (Amazon Machine Image): Select the latest Ubuntu (e.g., Ubuntu 22.04 LTS)

  3. Instance Type: Choose t2.large (2 vCPU, 8 GB RAM)

  4. Key Pair: Select an existing key pair or create a new one to access your instances via SSH

  5. Storage: Set root volume to at least 25 GB

  6. Security Group: Use the security group you configured earlier (with necessary ports open)

  7. Click Launch Instance

  8. Tags: Add a Name tag to each instance to identify them easily

| Instance | Name Tag |
| --- | --- |
| 1 | Nexus |
| 2 | SonarQube |
| 3 | Jenkins |
| 4 | InfraServer |


🔗 Connecting to EC2 Instances via SSH

Once your EC2 instances are up and running, you can connect to them securely using SSH (Secure Shell) from your local terminal.

🧩 What You Need:

  • The .pem file (private key) you downloaded or created while launching the EC2 instances

  • The public IP address of each EC2 instance (you'll find it in the EC2 dashboard)

💻 SSH Command

ssh -i <path-to-pem-file> ubuntu@<public-ip>
  • Replace <path-to-pem-file> with the path to your .pem file

  • Replace <public-ip> with the public IP of the instance you want to access

Repeat this for each instance:

  • Nexus

  • SonarQube

  • Jenkins

  • InfraServer

ssh -i <path-to-pem-file> ubuntu@<public-ip-of-SonarQube>
ssh -i <path-to-pem-file> ubuntu@<public-ip-of-Nexus>
ssh -i <path-to-pem-file> ubuntu@<public-ip-of-Jenkins>
ssh -i <path-to-pem-file> ubuntu@<public-ip-of-InfraServer>

Configure each server

To ensure your server is up to date, run the following command:

sudo apt update

This will refresh the package list and update any outdated software.

Configure the Infrastructure Server

Now, we need to make sure that the server has the necessary permissions to create resources on AWS.

  1. Create an IAM Role in AWS:

    • Go to the AWS Management Console.

    • In the navigation bar, search for IAM and select Roles.

    • Click on Create role.

  2. Set the Trusted Entity:

    • Trusted Entity Type: Select AWS service.

    • Use Case: Select EC2 (this allows EC2 instances to assume this role).

  3. Attach Policies:

    • Click Next: Permissions.

    • In the search bar, search for AdministratorAccess.

    • Check the box next to AdministratorAccess to give the EC2 instance full permissions.

  4. Assign a Role Name:

    • Choose a role name

    • Click Create role to finish creating the IAM role.

Attach the IAM Role to the EC2 Instance

Now that your IAM role is created, it's time to attach it to your EC2 instance.

  1. Go to the EC2 Dashboard in AWS.

  2. Find the InfraServer instance and click on it to open the instance details.

  3. Click Actions → Security → Modify IAM Role.

  4. Under the IAM role section, select the role you just created.

  5. Click Update IAM role to apply the changes.

InfraServer now has the necessary permissions to create AWS resources. 🚀
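For reference, the same console steps can be sketched with the AWS CLI. The role/profile name `infra-admin-role` is an assumption; note that EC2 attaches roles through an instance profile:

```
# Hypothetical CLI equivalent of the console steps above.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name infra-admin-role \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name infra-admin-role \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# EC2 instances assume roles via an instance profile
aws iam create-instance-profile --instance-profile-name infra-admin-role
aws iam add-role-to-instance-profile --instance-profile-name infra-admin-role \
  --role-name infra-admin-role
aws ec2 associate-iam-instance-profile --instance-id <infraserver-instance-id> \
  --iam-instance-profile Name=infra-admin-role

# From the InfraServer itself, confirm the role is active:
aws sts get-caller-identity
```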

Install AWS CLI on the Infra Server

To manage AWS resources from your server, you need to install the AWS Command Line Interface (CLI).

  1. Download AWS CLI: Run the following command to download the AWS CLI installation package:

     curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    
  2. Install unzip (if not already installed):

     sudo apt install unzip -y
    
  3. Unzip the AWS CLI package:

     unzip awscliv2.zip
    
  4. Install AWS CLI:

     sudo ./aws/install
    

Verify AWS CLI Installation

To ensure AWS CLI is installed correctly, run:

aws --version

This should display the installed version of AWS CLI.

Configure AWS CLI

Set the AWS region globally so that the AWS CLI knows where to create resources. Use the following command:

aws configure set region us-east-1

Replace us-east-1 with your preferred AWS region if needed.

Install Terraform

To use Terraform, follow these steps to install it on your InfraServer:

  1. Update the server and install required dependencies:

     sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
    
  2. Download the HashiCorp GPG key:

     wget -O- https://apt.releases.hashicorp.com/gpg | \
     gpg --dearmor | \
     sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
    
  3. Verify the GPG key:

     gpg --no-default-keyring \
     --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
     --fingerprint
    
  4. Add the HashiCorp repository:

     echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
     https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
     sudo tee /etc/apt/sources.list.d/hashicorp.list
    
  5. Update the package list:

     sudo apt update
    
  6. Install Terraform:

     sudo apt-get install terraform
    

Verify Terraform Installation

To confirm that Terraform is installed, run:

terraform -version

This will display the installed version of Terraform.

Clone the Infrastructure as Code (IaC) Repository

Clone the GitHub repository that contains the Terraform configuration files onto the InfraServer.

  1. Clone the repository:

     git clone https://github.com/praduman8435/EKS-Terraform.git
    

  2. Navigate into the repository directory:

     cd EKS-Terraform
    

Create Resources on AWS Using Terraform

  1. Initialize Terraform:

    Before applying the configuration, you need to initialize the Terraform working directory:

     terraform init
    

  2. Check the Resources Terraform Will Create:

    Run the following command to see a preview of the resources Terraform will create:

     terraform plan
    

  3. Apply the Terraform Configuration:

    Once youโ€™re ready to create the resources, apply the configuration:

     terraform apply --auto-approve
    

    This command will automatically approve the changes without prompting for confirmation.

Now, sit back and relax for approximately 10 minutes as Terraform creates the resources in AWS. You can monitor the progress in the terminal.
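Once the apply finishes, you can optionally sanity-check what was created (the cluster name `capstone-cluster` used later in this guide should appear in the list):

```
# Optional checks after terraform apply, run from the EKS-Terraform directory
terraform state list                         # every resource Terraform now manages
aws eks list-clusters --region us-east-1     # the new EKS cluster should be listed
```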


Set Up the Jenkins Server

Now that the infrastructure is ready, let's set up the Jenkins server. Jenkins will be the core tool for automating our CI/CD pipeline.

Step 1: Install Java

Jenkins requires Java to run. We'll install OpenJDK 17 (a stable, widely used version):

sudo apt install openjdk-17-jre-headless -y

Step 2: Install Jenkins

  1. Add the Jenkins repository key:

     sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
     https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
    
  2. Add the Jenkins repository to your system:

     echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
     https://pkg.jenkins.io/debian-stable binary/" | sudo tee \
     /etc/apt/sources.list.d/jenkins.list > /dev/null
    
  3. Update your package list:

     sudo apt-get update
    
  4. Install Jenkins:

     sudo apt-get install jenkins -y
    

Step 3: Access Jenkins Web UI

  1. Get the Jenkins server's public IP from the AWS EC2 dashboard.

  2. In your browser, go to:

     http://<public-ip-of-your-Jenkins-server>:8080
    

  3. Important: Make sure port 8080 is open in the security group attached to your Jenkins EC2 instance.

    • Go to EC2 → Security Groups → Edit inbound rules.

    • Add a rule to allow TCP traffic on port 8080 from your IP or anywhere (0.0.0.0/0) for testing.

Step 4: Unlock Jenkins

  1. On the Jenkins setup page, it will ask for the initial admin password.

  2. Run this command on your server to get it:

     sudo cat /var/lib/jenkins/secrets/initialAdminPassword
    

  3. Copy the password and paste it into the browser.

Step 5: Install Plugins & Create Admin User

  1. Click "Install Suggested Plugins" when prompted.

  2. Once the plugins are installed, create your admin user (username, password, email).

  3. Click "Save and Continue".

  4. Then click "Save and Finish".

🎉 Jenkins is Ready!

You've successfully installed and configured Jenkins! You can now start creating jobs and automating your CI/CD pipeline.


Set Up the SonarQube Server

SonarQube is a powerful tool for continuously inspecting code quality and security. In this step, we'll install Docker and run SonarQube as a container on our server.

Step 1: Update the Server

Let's start by updating the system packages:

sudo apt update

Step 2: Install Docker on the Server

To run SonarQube as a container, we first need Docker installed.

  1. Install required dependencies:

     sudo apt-get install ca-certificates curl -y
    
  2. Add Docker's official GPG key:

     sudo install -m 0755 -d /etc/apt/keyrings
     sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
     sudo chmod a+r /etc/apt/keyrings/docker.asc
    
  3. Add the Docker repository:

     echo \
     "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
     https://download.docker.com/linux/ubuntu \
     $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
     sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  4. Update your package index again:

     sudo apt-get update
    

  5. Install Docker Engine and tools:

     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
    

Step 3: Run Docker Without sudo

To run Docker without typing sudo every time:

sudo usermod -aG docker $USER

⚠️ Important: You need to log out and log back in after running this command for the changes to take effect.

Step 4: Run SonarQube in a Docker Container

Now that Docker is ready, let's launch SonarQube:

docker run -d --name sonarqube -p 9000:9000 sonarqube:lts

This command will:

  • Download the latest LTS (Long-Term Support) version of SonarQube.

  • Start it in a detached container.

  • Expose it on port 9000.

Step 5: Access SonarQube Web UI

Open your browser and go to:

http://<public-ip-of-sonarqube>:9000

Make sure port 9000 is allowed in the EC2 security group.

Step 6: Login and Change Default Password

Use the default credentials to log in:

  • Username: admin

  • Password: admin

You'll be prompted to change the password on your first login.

🎉 SonarQube is now up and running on your server!
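As an optional sanity check, SonarQube exposes a system status endpoint; once it reports UP, the UI is ready to serve logins:

```
# Query SonarQube's standard Web API status endpoint
# (replace the placeholder with your instance's public IP)
curl -s http://<public-ip-of-sonarqube>:9000/api/system/status
# a healthy instance responds with JSON whose "status" field is "UP"
```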


Set Up the Nexus Server

Nexus is a repository manager where we can store and manage build artifacts like Docker images, Maven packages, and more. We'll install and run Nexus in a Docker container.

Step 1: Update the System

Start by updating all the packages:

sudo apt update

Step 2: Install Docker

If Docker isn't already installed on this server, follow these steps:

  1. Install required packages:

     sudo apt-get install ca-certificates curl -y
    
  2. Add Docker's GPG key:

     sudo install -m 0755 -d /etc/apt/keyrings
     sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
     sudo chmod a+r /etc/apt/keyrings/docker.asc
    
  3. Add the Docker repository:

     echo \
     "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
     https://download.docker.com/linux/ubuntu \
     $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
     sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  4. Update package index:

     sudo apt-get update
    

  5. Install Docker:

     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
    

Step 3: Run Docker Without sudo (Optional)

To avoid typing sudo before every Docker command:

sudo usermod -aG docker $USER

๐Ÿ” Log out and log back in to apply this change.

Step 4: Run Nexus in a Docker Container

Now that Docker is ready, let's launch the Nexus container:

docker run -d --name nexus -p 8081:8081 sonatype/nexus3:latest

  • This runs Nexus in the background.

  • The web interface will be available on port 8081.

Step 5: Access Nexus Web Interface

  1. In your browser, go to:

     http://<public-ip-of-nexus>:8081
    

  2. Make sure port 8081 is allowed in your EC2 instance's Security Group.

Step 6: Retrieve the Admin Password

To sign in, you need the initial admin password, which is stored inside the container.

Here's how to get it:

  1. Find the container ID:

     docker ps
    
  2. Access the container shell:

     docker exec -it <container-id> /bin/bash
    
  3. Print the password:

     cat /nexus-data/admin.password
    
  4. Copy the password and go back to the Nexus UI.

Step 7: Login & Set a New Password

  • Username: admin

  • Password: (paste the password you retrieved)

After logging in, you will be asked to set a new admin password.

🎉 That's it! Your Nexus repository manager is now ready to use.


Configure Jenkins Plugins and Docker

Now that Jenkins is up and running, let's install the required plugins and set up Docker on the Jenkins server to enable full CI/CD functionality.

Step 1: Install Required Jenkins Plugins

  1. Go to Jenkins Dashboard
    → Click Manage Jenkins
    → Click Manage Plugins
    → Go to the Available tab

  2. Search and install the following plugins (you can select multiple at once):

    • Pipeline Stage View

    • Docker Pipeline

    • SonarQube Scanner

    • Config File Provider

    • Maven Integration

    • Pipeline Maven Integration

    • Kubernetes

    • Kubernetes CLI

    • Kubernetes Client API

    • Kubernetes Credentials

    • Kubernetes Credentials Provider

  3. Click Install without restart and wait for all plugins to be installed.

  4. Once done, restart Jenkins to apply all changes.

Step 2: Install Docker on Jenkins Server

We'll now install Docker so Jenkins jobs can build Docker images directly.

  1. Update and install required packages:

     sudo apt-get update
     sudo apt-get install ca-certificates curl -y
    
  2. Add Docker's GPG key:

     sudo install -m 0755 -d /etc/apt/keyrings
     sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
     sudo chmod a+r /etc/apt/keyrings/docker.asc
    
  3. Add the Docker repository:

     echo \
     "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
     https://download.docker.com/linux/ubuntu \
     $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
     sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    

  4. Install Docker:

     sudo apt-get update
     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
    

Step 3: Allow Jenkins User to Run Docker

To allow Docker commands to run without sudo (the jenkins service user itself is added to the docker group in a later step):

sudo usermod -aG docker $USER

🔁 Log out and log back in (or reboot) for the group change to take effect.

Step 4: Configure Maven and Sonar Scanner in Jenkins

  1. Go to Jenkins Dashboard → Manage Jenkins → Global Tool Configuration

  2. Scroll down to the Maven section:

    • Click Add Maven

    • Name it: maven3

    • Choose "Install automatically" (Jenkins will download it)

  3. Scroll to the SonarQube Scanner section:

    • Click Add SonarQube Scanner

    • Name it: sonar-scanner

    • Enable "Install automatically"

  4. Click Save or Apply to finish.

🎉 Done! Jenkins is now fully equipped with all the tools you need to build, analyze, and deploy your applications in a modern DevOps workflow.


Create and Configure Jenkins Pipeline

Step 1: Create a New Pipeline

  1. Go to Jenkins Dashboard

  2. Click New Item

  3. Enter a name

  4. Choose Pipeline as the item type

  5. Click OK

  6. Under Build Discarder:

    • Check Discard Old Builds

    • Set Max # of builds to keep = 3
      (Keeps Jenkins light and fast)
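The same retention policy can also be declared directly in the pipeline script instead of the job UI; a minimal declarative-pipeline sketch:

```groovy
pipeline {
    agent any
    options {
        // Keep only the last 3 builds, matching the Build Discarder setting above
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages { /* ... your stages ... */ }
}
```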

Step 2: Install Trivy on Jenkins Server

Trivy is used for container vulnerability scanning.

Run the following commands on your Jenkins server:

sudo apt-get install wget apt-transport-https gnupg lsb-release -y

wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null

echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] \
https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | \
sudo tee -a /etc/apt/sources.list.d/trivy.list

sudo apt-get update
sudo apt-get install trivy -y

Check if Trivy is working:

trivy --version

Step 3: Add SonarQube Credentials in Jenkins

  1. Go to SonarQube UI → Click on Administration → Security → Users

  2. Generate a new token

Now, add the token in Jenkins:

  1. Go to Jenkins Dashboard → Manage Jenkins → Credentials

  2. Click on (global) → Add Credentials

  3. Fill the form:

    • Kind: Secret Text

    • Secret: (Paste the token copied from SonarQube)

    • ID: sonar-token (we'll refer to this in the pipeline)

  4. Click Create

Step 4: Configure SonarQube Server in Jenkins

  1. Go to Jenkins Dashboard → Manage Jenkins → Configure System

  2. Scroll to SonarQube servers section

  3. Click Add SonarQube

  4. Fill the details:

    • Name: sonar

    • Server Authentication Token: Choose sonar-token

    • Server URL: http://<public-ip-of-sonarqube>:9000

  5. Click Save

Write Your Pipeline Script

You're now ready to write your pipeline script under the Pipeline → Pipeline Script section of the job.

pipeline {
    agent any

    tools{
        maven 'maven3'
    }
    environment {
        SCANNER_HOME= tool 'sonar-scanner'
    }

    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/praduman8435/Capstone-Mega-DevOps-Project.git'
            }
        }
        stage('Compilation') {
            steps {
                sh 'mvn compile'
            }
        }
        stage('Testing') {
            steps {
                // Note: -DskipTests=true skips test execution; drop the flag to actually run tests
                sh 'mvn test -DskipTests=true'
            }
        }
        stage('Trivy FS Scan') {
            steps {
                sh 'trivy fs --format table -o fs-report.html .'
            }
        }
        stage('Code Quality Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=GCBank -Dsonar.projectKey=GCBank \
                            -Dsonar.java.binaries=target '''
                }
            }
        }
    }
}

Now run the pipeline and confirm that everything up to this point works.

Once it succeeds, move on and add the remaining stages to the pipeline.

Implement Quality Gate Check

In SonarQube:

  1. Go to: Administration → Configuration → Webhooks

  2. Create New Webhook

    • Name: sonarqube-webhook

    • URL: http://<jenkins-public-ip>:8080/sonarqube-webhook/

  3. Click Create

This webhook will notify Jenkins once the SonarQube analysis is complete and the Quality Gate status is available.

Your Jenkins server should be publicly accessible (or at least reachable by the SonarQube server) on that webhook URL.

Update pom.xml with Nexus Repositories

In Nexus UI:

  • Browse → Select maven-releases and maven-snapshots

  • Copy both URLs (you'll use them in pom.xml)

In your pom.xml, find the <distributionManagement> block and update it like this:

<distributionManagement>
    <repository>
        <id>maven-releases</id>
        <url>http://<nexus-ip>:8081/repository/maven-releases/</url>
    </repository>
    <snapshotRepository>
        <id>maven-snapshots</id>
        <url>http://<nexus-ip>:8081/repository/maven-snapshots/</url>
    </snapshotRepository>
</distributionManagement>

💡 Don't forget to:

  • Replace <nexus-ip> with your actual Nexus IP

  • Commit and push the change to GitHub

Configure Nexus Credentials in Jenkins via settings.xml

In Jenkins:

  • Go to: Manage Jenkins → Managed Files → Add a new Config File

  • Type: Global Maven settings.xml

  • ID: Capstone

  • Click Next

  • In the generated content, find the <servers> section

📄 In the <servers> section, add:

<servers>
  <server>
    <id>maven-releases</id>
    <username>admin</username>
    <password>heyitsme</password>
  </server>

  <server>
    <id>maven-snapshots</id>
    <username>admin</username>
    <password>heyitsme</password>
  </server>
</servers>

  • Submit the changes

๐Ÿ” This ensures your Maven builds can authenticate with Nexus to deploy artifacts.

Add DockerHub Credentials in Jenkins

  • Go to: Manage Jenkins → Credentials → Global → Add Credentials

  • Kind: Username and Password

  • ID: docker-cred

  • Username: Your DockerHub username

  • Password: Your DockerHub password

💡 You'll reference this ID in your pipeline when logging in to DockerHub.

Add Jenkins User to Docker Group

Run this on Jenkins server:

sudo usermod -aG docker jenkins

Then reboot the server or restart the Jenkins service:

sudo systemctl restart jenkins

This lets Jenkins access Docker without sudo.
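A quick way to confirm the group change took effect:

```
# Run a Docker command as the jenkins user; if this lists containers (even an
# empty list) without a permission error, Jenkins can reach the Docker daemon.
sudo -u jenkins docker ps
```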

Update the Pipeline Script

pipeline {
    agent any

    tools{
        maven 'maven3'
    }
    environment {
        SCANNER_HOME= tool 'sonar-scanner'
        IMAGE_TAG= "v${BUILD_NUMBER}"
    }

    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/praduman8435/Capstone-Mega-DevOps-Project.git'
            }
        }
        stage('Compilation') {
            steps {
                sh 'mvn compile'
            }
        }
        stage('Testing') {
            steps {
                sh 'mvn test -DskipTests=true'
            }
        }
        stage('Trivy FS Scan') {
            steps {
                sh 'trivy fs --format table -o fs-report.html .'
            }
        }
        stage('Code Quality Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=GCBank -Dsonar.projectKey=GCBank \
                            -Dsonar.java.binaries=target '''
                }
            }
        }
        stage('Quality Gate Check'){
            steps{
                waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
            }
        }
        stage('Build the Application'){
            steps{
                sh 'mvn package -DskipTests'
            }
        }
        stage('Push Artifacts to Nexus'){
            steps{
                withMaven(globalMavenSettingsConfig: 'Capstone', jdk: '', maven: 'maven3', mavenSettingsConfig: '', traceability: true) {
                    sh 'mvn clean deploy -DskipTests'
                }
            }
        }
        stage('Build & Tag Docker Image'){
            steps{
                script{
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh 'docker build -t thepraduman/bankapp:$IMAGE_TAG .'
                    }
                }
            }
        }
        stage('Docker Image Scan') {
            steps {
                sh 'trivy image --format table -o image-report.html thepraduman/bankapp:$IMAGE_TAG'
            }
        }
        stage('Push Docker Image') {
            steps {
                script{
                    withDockerRegistry(credentialsId: 'docker-cred') {
                        sh 'docker push thepraduman/bankapp:$IMAGE_TAG'
                    }
                }
            }
        }
    }
}

Click Build Now to check whether the pipeline triggers and completes successfully.

Automating Jenkins Pipeline Trigger with GitHub Webhook

Now that the CI pipeline is ready, let's automate it so that it runs automatically every time new code is pushed to the GitHub repository. We'll use the Generic Webhook Trigger plugin in Jenkins for this.

Install Generic Webhook Trigger Plugin

  1. Go to Jenkins Dashboard → Manage Jenkins → Plugins

  2. Under the Available plugins tab, search for Generic Webhook Trigger

  3. Select it and click on Install

  4. Restart Jenkins once the installation is complete

Configure Webhook Trigger in Your Pipeline

  1. Go back to the Jenkins dashboard and open your pipeline job (capstone_CI)

  2. Click on Configure

  3. Scroll down to the Build Triggers section and check the box for Generic Webhook Trigger

  4. Under Post content parameters, add:

    • Variable: ref

    • Expression: $.ref

    • Content-Type: JSONPath

  5. Add a token:

    • Token Name: capstone

  6. (Optional) Add a filter to trigger the pipeline only for changes on the main branch:

    • Expression: refs/heads/main

    • Text: $ref

  7. Click Save

Once saved, you'll see a webhook URL under the token section, something like:

http://<your-jenkins-ip>:8080/generic-webhook-trigger/invoke?token=capstone
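Before wiring up GitHub, you can simulate a push payload with curl to confirm the trigger fires (assuming port 8080 is reachable from your machine):

```
# Hypothetical smoke test of the Generic Webhook Trigger endpoint.
# Jenkins replies with JSON describing whether any job was triggered.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/main"}' \
  "http://<your-jenkins-ip>:8080/generic-webhook-trigger/invoke?token=capstone"
```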

Configure GitHub Webhook

  1. Go to your GitHub repository (the one used in the pipeline)

  2. Click on Settings → Webhooks → Add Webhook

  3. Fill out the form as follows:

    • Payload URL: Paste the webhook URL from Jenkins

    • Content Type: application/json

    • Leave the secret field blank (or add one and configure Jenkins accordingly)

    • Choose Just the push event

  4. Click on Add Webhook

That's it! 🎉 Now, every time a new commit is pushed to the main branch, Jenkins will automatically trigger the pipeline.

Setting Up CD Pipeline

With our CI pipeline automated, it's time to set up the CD pipeline. The first step is ensuring we can update the Docker image tag in the Kubernetes deployment every time the CI pipeline builds a new image.

We'll start by granting Jenkins access to the CD GitHub repository and setting up email notifications so that you receive updates when your pipeline fails or succeeds.
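The tag update itself usually boils down to a one-line sed that a CD stage runs before committing the manifest back to the CD repo. A self-contained sketch (the manifest layout and image name mirror this project but are assumptions; `IMAGE_TAG` would come from the CI build):

```shell
# Hypothetical sketch: bump the image tag in a deployment manifest, as a CD
# stage would do before committing and pushing the change to the CD repo.
IMAGE_TAG="v42"   # in Jenkins this would be v${BUILD_NUMBER}

# Stand-in manifest fragment for demonstration
cat > deployment.yaml <<'EOF'
spec:
  containers:
  - name: bankapp
    image: thepraduman/bankapp:v1
EOF

# Replace whatever tag currently follows the image name with the new one
sed -i "s|\(image: thepraduman/bankapp:\).*|\1${IMAGE_TAG}|" deployment.yaml

grep 'image:' deployment.yaml   # the manifest now references bankapp:v42
```

In the real pipeline this runs inside the cloned CD repository, followed by `git commit` and `git push` using the `github-cred` credentials configured below.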

Add GitHub Credentials to Jenkins

This will allow Jenkins to clone or push changes to your GitHub CD repository.

  1. Go to Jenkins Dashboard → Manage Jenkins → Credentials

  2. Click on (global)

  3. Click on Add Credentials

  4. Fill in the following:

    • Kind: Username with password

    • Scope: Global

    • Username: Your GitHub username

    • Password: Your GitHub password or personal access token (recommended)

    • ID: github-cred

  5. Click on Create

Configure Email Notifications in Jenkins

Generate Gmail App Password

To securely send emails from Jenkins, we'll use a Gmail App Password instead of your actual Gmail password.

  1. Log in to your Google account

  2. Navigate to Security

  3. Enable 2-Step Verification if it's not already enabled

  4. Scroll down to App Passwords

  5. Generate a new app password:

    • App name: capstone

  6. Copy the generated token (you'll need it in the next step)

Add Gmail Credentials to Jenkins

  1. Go to Jenkins Dashboard → Manage Jenkins → Credentials

  2. Click on (global) and then Add Credentials

  3. Fill in the following:

    • Kind: Username with password

    • Scope: Global

    • Username: Your Gmail address

    • Password: The generated Gmail app password

    • ID: mail-cred

  4. Click Create

Configure Jenkins Mail Server

  1. Go to Manage Jenkins → Configure System

  2. Scroll to Extended E-mail Notification and fill out:

    • SMTP Server: smtp.gmail.com

    • SMTP Port: 465

    • Credentials: Select mail-cred

    • Check Use SSL

  3. Scroll to the E-mail Notification section:

    • SMTP Server: smtp.gmail.com

    • Use SMTP Authentication: โœ…

    • Username: Your Gmail address

    • Password: Your Gmail App Password (recently generated)

    • SMTP Port: 465

    • Use SSL: โœ…

  4. Click Save

๐Ÿ›ก๏ธ Make sure ports 465 and 587 are open in the Jenkins serverโ€™s security group to allow email traffic.

✅ Test the Email Setup

  1. In the Extended E-mail Notification section, click on Test configuration by sending a test e-mail

  2. Enter a recipient email address

  3. Click Test Configuration

If everything is set up correctly, you should receive an email confirming that the Jenkins email notification system is working! 🎉

Add the Email Notification Script to the CI Pipeline

Add this inside the pipeline block but outside the stages block:

post {
    always {
        script {
            def jobName = env.JOB_NAME
            def buildNumber = env.BUILD_NUMBER
            def pipelineStatus = currentBuild.result ?: 'UNKNOWN'
            def bannerColor = pipelineStatus.toUpperCase() == 'SUCCESS' ? 'green' : 'red'

            def body = """
                <html>
                    <body>
                        <div style="border: 4px solid ${bannerColor}; padding: 10px;">
                            <h2>${jobName} - Build #${buildNumber}</h2>
                            <div style="background-color: ${bannerColor}; padding: 10px;">
                                <h3 style="color: white;">Pipeline Status: ${pipelineStatus.toUpperCase()}</h3>
                            </div>
                            <p>Check the <a href="${env.BUILD_URL}">Console Output</a> for more details.</p>
                        </div>
                    </body>
                </html>
            """

            emailext(
                subject: "${jobName} - Build #${buildNumber} - ${pipelineStatus.toUpperCase()}",
                body: body,
                to: 'praduman.cnd@gmail.com',
                from: 'praduman.8435@gmail.com',
                replyTo: 'praduman.8435@gmail.com',
                mimeType: 'text/html',
                attachmentsPattern: 'fs-report.html'
            )
        }
    }
}

With that, Jenkins email notifications are fully configured.

๐Ÿš€ Configure the InfraServer

Install Kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

kubectl version --client
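The commands above download both the binary and its SHA-256 checksum file, but never actually compare the two. The kubectl .sha256 file contains only the hash, so you have to build a line sha256sum understands. The pattern is shown here against a stand-in file (kubectl-demo); substitute the real kubectl download:

```shell
# The .sha256 file holds just the hash, so construct "<hash>  <filename>" ourselves.
# Stand-in file for illustration; replace kubectl-demo with the real kubectl binary.
printf 'example-binary-contents' > kubectl-demo
sha256sum kubectl-demo | awk '{print $1}' > kubectl-demo.sha256

# Feed the constructed line to sha256sum --check; it prints "<filename>: OK" on success.
echo "$(cat kubectl-demo.sha256)  kubectl-demo" | sha256sum --check
# prints: kubectl-demo: OK
```

If the check fails, re-download the binary before installing it.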

Update the Kubeconfig File

Connect your Jenkins server to the EKS cluster:

aws eks update-kubeconfig \
  --region us-east-1 \
  --name capstone-cluster

Install eksctl

eksctl is a CLI tool that simplifies EKS cluster operations.

curl -sLO "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz"
tar -xzf eksctl_$(uname -s)_amd64.tar.gz
sudo mv eksctl /usr/local/bin

# Verify the installation
eksctl version

Install Helm

Helm is used for managing Kubernetes applications using Helm charts.

sudo apt update && sudo apt upgrade -y
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Associate IAM OIDC Provider with the Cluster

This step is needed to create service accounts with IAM roles.

eksctl utils associate-iam-oidc-provider \
  --cluster capstone-cluster \
  --region us-east-1 \
  --approve

Create IAM Service Account for EBS CSI Driver

This enables your cluster to dynamically provision EBS volumes.

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster capstone-cluster \
  --region us-east-1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --override-existing-serviceaccounts

Deploy EBS CSI Driver

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.30"
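With the driver deployed, PersistentVolumeClaims can provision EBS volumes on demand once a StorageClass points at it. A minimal gp3-backed class might look like this (the name ebs-gp3 is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com        # the EBS CSI driver installed above
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until a pod is scheduled
```

Any PVC that sets storageClassName: ebs-gp3 will then get a gp3 volume created in the pod's availability zone.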

Install NGINX Ingress Controller

This is required for routing external traffic to your services inside Kubernetes:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

Install Cert-Manager (for TLS Certificates)

Cert-manager helps you manage SSL certificates inside Kubernetes:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

๐Ÿ” Configure RBAC (Role-Based Access Control)

To manage access control and permissions properly in your Kubernetes cluster, weโ€™ll start by creating a dedicated namespace and then apply RBAC policies.

  • Create a namespace with name webapps

      kubectl create ns webapps
    

  • Create a Service Account

      vim service-account.yaml
    

    Add the following YAML and save the file:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: jenkins
        namespace: webapps
    
      kubectl apply -f service-account.yaml
    
  • Create a Role

      vim role.yaml
    

    Add the following YAML and save the file:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: jenkins-role
        namespace: webapps
      rules:
        - apiGroups:
            - ""
            - apps
            - networking.k8s.io
            - autoscaling
          resources:
            - secrets
            - configmaps
            - persistentvolumeclaims
            - services
            - pods
            - deployments
            - replicasets
            - ingresses
            - horizontalpodautoscalers
          verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    
      kubectl apply -f role.yaml
    
  • Bind the role to service account

      vim role-binding.yaml
    

    Add the following YAML and save the file:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: jenkins-rolebinding
        namespace: webapps 
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: jenkins-role 
      subjects:
      - namespace: webapps 
        kind: ServiceAccount
        name: jenkins
    
      kubectl apply -f role-binding.yaml
    
  • Create Cluster role

      vim cluster-role.yaml
    

    Add the following YAML and save the file:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: jenkins-cluster-role
      rules:
      - apiGroups: [""]
        resources: 
           - persistentvolumes
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["storage.k8s.io"]
        resources: 
           - storageclasses
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["cert-manager.io"]
        resources: 
           - clusterissuers
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    
      kubectl apply -f cluster-role.yaml
    
  • Bind cluster role to Service Account

      vim cluster-role-binding.yaml
    

    Add the following YAML and save the file:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: jenkins-cluster-rolebinding
      subjects:
      - kind: ServiceAccount
        name: jenkins
        namespace: webapps
      roleRef:
        kind: ClusterRole
        name: jenkins-cluster-role
        apiGroup: rbac.authorization.k8s.io
    
      kubectl apply -f cluster-role-binding.yaml
    

Grant Jenkins Access to Kubernetes for Deployments

To allow Jenkins to deploy applications to your EKS cluster, we need to create a service account token and configure it in Jenkins.

Create a Kubernetes Token for Jenkins

Create a secret token that Jenkins will use to authenticate with your Kubernetes cluster.

Run the following command to create and open a token manifest:

vim token.yaml

Paste the following YAML content into the file:

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: jenkins-secret
  annotations:
    kubernetes.io/service-account.name: jenkins

Apply the secret to the webapps namespace:

kubectl apply -f token.yaml -n webapps

Now, retrieve the token using:

kubectl describe secret jenkins-secret -n webapps | grep token

Copy the generated token.
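Note that kubectl describe prints the token already decoded, whereas pulling it with jsonpath (kubectl get secret jenkins-secret -n webapps -o jsonpath='{.data.token}') returns it base64-encoded, because Secret data is stored base64-encoded. Decoding works like this, illustrated with a stand-in value rather than a real token:

```shell
# Stand-in for the base64 value that jsonpath would return from the Secret.
encoded=$(printf 'my-service-account-token' | base64)

# Decode it back to the usable token string.
printf '%s' "$encoded" | base64 -d
# prints: my-service-account-token
```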

Add Kubernetes Token to Jenkins

  1. Go to your Jenkins Dashboard โ†’ Manage Jenkins โ†’ Credentials.

  2. Select the (global) domain and click Add Credentials.

  3. Fill in the fields as follows:

    • Kind: Secret text

    • Secret: Paste the copied Kubernetes token

    • ID: k8s-cred

  4. Click Create.

Install kubectl on Jenkins Server

To let Jenkins execute Kubernetes commands, install kubectl on the Jenkins machine:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Verify installation:

kubectl version --client

Setting Up the CD Pipeline in Jenkins

Now that CI is complete, letโ€™s configure the Capstone Continuous Deployment (CD) Pipeline to automatically deploy your application to the Kubernetes cluster.

Step 1: Create a New Pipeline Job

  1. Go to Jenkins Dashboard โ†’ New Item

  2. Provide the following details:

    • Name: capstone_CD

    • Item Type: Select Pipeline

  3. Click OK

Step 2: Configure Build Retention

  1. Check the box "Discard old builds"

  2. Set Max # of builds to keep to 3

    This helps conserve resources by keeping only the latest builds.

Step 3: Add the Deployment Pipeline Script

Scroll down to the Pipeline section and paste the following script:

pipeline {
    agent any

    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/praduman8435/Capstone-Mega-CD-Pipeline.git'
            }  
        }

        stage('Deploy to Kubernetes') {
            steps {
                withKubeConfig(
                    credentialsId: 'k8s-cred',
                    clusterName: 'capstone-cluster',
                    namespace: 'webapps',
                    restrictKubeConfigAccess: false,
                    serverUrl: 'https://D133D06C5103AE18A950F2047A8EB7DE.gr7.us-east-1.eks.amazonaws.com'
                ) {
                    sh 'kubectl apply -f kubernetes/Manifest.yaml -n webapps'
                    sh 'kubectl apply -f kubernetes/HPA.yaml'
                    sleep 30
                    sh 'kubectl get pods -n webapps'
                    sh 'kubectl get svc -n webapps'
                }
            }  
        }
    }

    post {
        always {
            echo "Pipeline execution completed."
        }
    }
}

Step 4: Save & Run

  1. Click Save

  2. Click Build Now to trigger the pipeline

If everything is set up correctly, Jenkins will pull the deployment manifests from the CD GitHub repository and deploy your app to the webapps namespace in the EKS cluster.
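The pipeline applies kubernetes/HPA.yaml from the CD repo. For reference, a minimal HPA for this app might look like the sketch below; the target Deployment name (bankapp) and the thresholds are assumptions, since the actual manifest lives in the CD repository:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bankapp-hpa
  namespace: webapps
spec:
  scaleTargetRef:            # which workload to scale (name assumed here)
    apiVersion: apps/v1
    kind: Deployment
    name: bankapp
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

If CPU-based scaling never kicks in, check that metrics-server is running in the cluster; the HPA controller reads pod CPU usage from it.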

Verifying Kubernetes Resources & Enabling HTTPS with Custom Domain

After setting up the CI/CD pipeline, itโ€™s time to make sure everything is working perfectly and your application is accessible securely over HTTPS with a custom domain.

Step 1: Verify All Resources in the Cluster

On your Infra Server, run:

kubectl get all -n webapps

You should see all your application pods, services, and other resources running successfully. If everything looks good, proceed to the next step.

Step 2: Create a ClusterIssuer Resource for Letโ€™s Encrypt

We'll use Cert-Manager to automatically provision SSL certificates from Letโ€™s Encrypt.

  1. Create a file called cluster-issuer.yaml:

vim cluster-issuer.yaml

  2. Paste the following configuration:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: praduman.cnd@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
  3. Apply the ClusterIssuer:

kubectl apply -f cluster-issuer.yaml
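Let's Encrypt's production endpoint enforces strict rate limits (including a cap on failed validations), so while you're still debugging ingress and DNS it's safer to test against a parallel issuer pointed at the staging endpoint, then switch the ingress annotation back to letsencrypt-prod once everything validates:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: generous rate limits, but issues untrusted test certificates
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: praduman.cnd@gmail.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
```

Browsers will warn about the staging certificate; that's expected and only confirms the issuance flow works end to end.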

Step 3: Create an Ingress Resource for Your Application

  1. Create an Ingress configuration file:

vim ingress.yaml

  2. Paste the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bankapp-ingress
  namespace: webapps
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www.capstonebankapp.in
    secretName: bankapp-tls-secret
  rules:
  - host: www.capstonebankapp.in
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bankapp-service
            port:
              number: 80
  3. Apply the ingress resource:

kubectl apply -f ingress.yaml

  4. Check the status:

kubectl get ing -n webapps

Wait for a few moments, then run the command again. Youโ€™ll notice an external load balancer address (usually an AWS ELB) under the ADDRESS column.

Copy the ingress load balancer address; you'll need it for the DNS record in the next step.

Step 4: Configure Your Custom Domain on GoDaddy

  1. Log in to your GoDaddy account.

  2. Navigate to My Products โ†’ DNS Settings for your domain (capstonebankapp.in).

  3. Look for a CNAME record with name www and edit it.

  4. In the Value field, paste the Load Balancer URL you got from the Ingress.

  5. Save the changes.

โณ Wait a few minutes for the DNS changes to propagate.

Step 5: Access Your Application

Open your browser and visit:

https://www.capstonebankapp.in/login

If everything was configured correctly, your application will now load securely over HTTPS with a valid Letโ€™s Encrypt certificate.


Setup Monitoring with Prometheus & Grafana on EKS

After deploying your application, it's essential to monitor its health, performance, and resource usage. Letโ€™s integrate Prometheus and Grafana into your Kubernetes cluster using Helm.

Add Prometheus Helm Repo

On your Infra Server, add the official Prometheus Community Helm chart repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Create values.yaml for Custom Configuration

We'll define how Prometheus and Grafana should be deployed, what metrics to scrape, and how to expose the services.

  1. Create a file called values.yaml:

vi values.yaml

  2. Paste the following configuration:
# values.yaml for kube-prometheus-stack

alertmanager:
  enabled: false

prometheus:
  prometheusSpec:
    service:
      type: LoadBalancer
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
  additionalScrapeConfigs:
    - job_name: node-exporter
      static_configs:
        - targets:
            - node-exporter:9100
    - job_name: kube-state-metrics
      static_configs:
        - targets:
            - kube-state-metrics:8080

grafana:
  enabled: true
  service:
    type: LoadBalancer
  adminUser: admin
  adminPassword: admin123

prometheus-node-exporter:
  service:
    type: LoadBalancer

kube-state-metrics:
  enabled: true
  service:
    type: LoadBalancer

Save and exit the file.

Install Monitoring Stack with Helm

helm upgrade --install monitoring prometheus-community/kube-prometheus-stack -f values.yaml -n monitoring --create-namespace

Patch Services to Use LoadBalancer

(Optional if already configured in values.yaml, but ensures services are exposed)

kubectl patch svc monitoring-kube-prometheus-prometheus -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc monitoring-kube-state-metrics -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc monitoring-prometheus-node-exporter -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'

Check Services & Access Grafana

Get all resources in the monitoring namespace:

kubectl get all -n monitoring
kubectl get svc -n monitoring

Youโ€™ll find External IPs assigned to services like Grafana and Prometheus.

โžค Access Grafana

  • URL: http://<grafana-external-ip>

  • Username: admin

  • Password: admin123

โžค Access Prometheus

  • URL: http://<prometheus-external-ip>:9090

  • Go to Status โ†’ Targets to see whatโ€™s being monitored.

Configure Grafana Dashboard

  1. Open the Grafana dashboard.

  2. Go to Connections โ†’ Data Sources โ†’ Add new.

  3. Search for Prometheus and select it.

  4. In the URL, enter your Prometheus service URL (e.g., http://<prometheus-external-ip>:9090).

  5. Click Save & Test.

๐ŸŽฏ View Dashboards

  • Go to Dashboards โ†’ Browse.

  • Explore default dashboards for Node Exporter, Kubernetes metrics, and more.

Now you have real-time observability into your EKS cluster!


โœ… Conclusion

In this project, I've successfully built an enterprise-grade CI/CD pipeline from scratch using Jenkins, Kubernetes (EKS), Docker, GitHub, and other DevOps tools, all running on AWS.

I automated the entire workflow:

  • From building and pushing Docker images in the CI pipeline

  • To continuously deploying them into a secure, production-ready Kubernetes cluster via CD

  • With proper ingress routing, TLS certificates, and monitoring in place using Prometheus and Grafana

This setup demonstrates how modern DevOps practices can streamline software delivery and infrastructure management. It not only improves deployment speed but also ensures reliability, scalability, and observability of applications.

๐Ÿ’ก Letโ€™s connect and discuss DevOps, cloud automation, and cutting-edge technology

๐Ÿ”— LinkedIn | ๐Ÿ’ผ Upwork | ๐Ÿฆ Twitter | ๐Ÿ‘จโ€๐Ÿ’ป GitHub

Written by

Praduman Prajapati

Bridging the gap between development and operations. Hey there! Iโ€™m a DevOps Engineer passionate about automation, cloud technologies, and making infrastructure scalable and efficient. I specialize in CI/CD, cloud automation, and infrastructure optimization, working with tools like AWS, Kubernetes, Terraform, Docker, Jenkins, and Ansible to streamline development and deployment processes. I also love sharing my knowledge through blogs on DevOps, Kubernetes, and cloud technologiesโ€”breaking down complex topics into easy-to-understand insights. Letโ€™s connect and talk all things DevOps!