Ultimate DevSecOps Project: End-to-End Kubernetes Three-Tier Deployment on AWS EKS with ArgoCD, Prometheus, Grafana & Jenkins

Table of contents
- Introduction: Why DevSecOps?
- Source Code & Repository 👇
- Step 1: Set Up a Jenkins Server on AWS EC2
- Step 2: Set Up a Jenkins Pipeline to Deploy EKS and Networking Services
- Step 3: Set Up the Jump Server
- Step 4: Set Up an AWS Load Balancer for EKS
- Step 5: Set Up and Configure ArgoCD on EKS
- Step 6: Configure SonarQube for DevSecOps Pipeline
- Step 6: Create an ECR Repository for Docker Images
- Install and Configure Essential Plugins & Tools in Jenkins
- Step 7: Create a Jenkins Pipeline for Frontend
- Step 8: Create a Jenkins Pipeline for Backend
- Step 9: Setup Application in ArgoCD
- Step 10: Configure Monitoring using Prometheus and Grafana
- Conclusion

Introduction: Why DevSecOps?
In today’s fast-paced tech world, speed and security go hand in hand. You can’t just build and deploy apps quickly—you need to keep them secure from day one. That’s where DevSecOps comes in! It blends development, security, and operations into one seamless process, ensuring that security is baked into every stage of the pipeline instead of being an afterthought.
This Ultimate DevSecOps Project is all about deploying a three-tier application on AWS EKS with a fully automated CI/CD pipeline. The goal? To make sure every piece of code is secure, high-quality, and production-ready before it even goes live.
What’s Inside This Project?
We’ll be using some of the best DevSecOps tools out there to make this happen:
✅ Jenkins – Automates the entire CI/CD pipeline.
✅ SonarQube & OWASP Dependency-Check – Keep the code clean, secure, and compliant.
✅ Trivy – Scans container images for security vulnerabilities before deployment.
✅ Terraform – Automates infrastructure setup on AWS.
✅ ArgoCD – Ensures Kubernetes deployments stay in sync with Git (GitOps).
✅ Prometheus & Grafana – Provide real-time monitoring and insights.
By the end of this project, you’ll have a fully functional, security-first DevSecOps pipeline that not only deploys applications but also keeps them safe, scalable, and efficient.
🚀 Ready to dive in? Let’s build something amazing!
Source Code & Repository 👇
You can find the source code for this project here:
Step 1: Set Up a Jenkins Server on AWS EC2
1. Log in to AWS and Launch an EC2 Instance
Go to the AWS Console.
Navigate to EC2 (Elastic Compute Cloud).
Click Launch Instance to create a new virtual machine.
2. Configure the EC2 Instance
AMI (Amazon Machine Image): Choose Ubuntu Server.
Instance Type: Select t2.2xlarge (8 vCPUs, 32GB RAM) for better performance.
Key Pair: No need to create a key pair (proceed without key pair).
3. Configure Security Group (Firewall Rules)
Set up inbound rules to allow required network traffic:
| Port | Protocol | Purpose |
| --- | --- | --- |
| 8080 | TCP | Jenkins Web UI (restrict access to trusted IPs or an internal network) |
| 50000 | TCP | Communication between the Jenkins controller and agents (for distributed builds) |
| 443 | TCP | HTTPS access (if Jenkins is secured with SSL) |
| 80 | TCP | HTTP access (if using an Nginx reverse proxy for Jenkins) |
| 9000 | TCP | SonarQube access (for code analysis) |
Note: For security, avoid opening all these ports to the public. Instead, restrict access to trusted IPs or internal networks.
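If you prefer the CLI, the same restricted rules can be created with the AWS CLI. This is a sketch only: the group name, the VPC ID placeholder, and the 203.0.113.0/24 trusted CIDR below are illustrative values, not values from this project.

```shell
# Create a security group for the Jenkins server (VPC ID is a placeholder).
aws ec2 create-security-group \
  --group-name jenkins-sg \
  --description "Jenkins server security group" \
  --vpc-id <your-vpc-id>

# Allow each required port only from a trusted CIDR, never 0.0.0.0/0.
for port in 8080 50000 443 80 9000; do
  aws ec2 authorize-security-group-ingress \
    --group-name jenkins-sg \
    --protocol tcp \
    --port "$port" \
    --cidr 203.0.113.0/24
done
```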
- Choose the created security group in the Network settings.
4. Configure Storage and IAM Role
Storage: Set at least 30 GiB.
IAM Role: Create an IAM role with Administrator Access and attach it to the EC2 instance so Jenkins can manage AWS resources.
5. Automate Installation with User Data
Instead of manually installing required tools, you can automate it using a User Data script. This script will automatically install:
Jenkins
Docker
Terraform
AWS CLI
SonarQube (running in a container)
Trivy (for security scanning)
#!/bin/bash
# For Ubuntu 22.04

# Installing Java
sudo apt update -y
sudo apt install openjdk-17-jre -y
sudo apt install openjdk-17-jdk -y
java --version

# Installing Jenkins
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
/usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y

# Installing Docker
sudo apt install docker.io -y
sudo usermod -aG docker jenkins
sudo usermod -aG docker ubuntu
sudo systemctl restart docker
# Convenience for a lab setup; in production, rely on the docker group
# membership above instead of a world-writable Docker socket.
sudo chmod 777 /var/run/docker.sock

# If you don't want to install Jenkins directly, you can run it as a container instead:
# docker run -d -p 8080:8080 -p 50000:50000 --name jenkins-container jenkins/jenkins:lts

# Run SonarQube as a Docker container
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

# Installing AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install

# Installing kubectl
sudo apt install curl -y
sudo curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

# Installing eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

# Installing Terraform
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install terraform -y

# Installing Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt update
sudo apt install trivy -y

# Installing Helm
sudo snap install helm --classic
Why use User Data?
Automates setup.
Ensures all required tools are installed before first login.
Saves time compared to manual installation.
6. Launch the Instance
- Click Launch Instance to start your Jenkins server.
7. Connect to the Instance
Since SSH is disabled for security reasons, use the EC2 Instance Connect feature:
Go to the AWS EC2 Console.
Select your Jenkins instance.
Click the "Connect" button at the top.
Choose the "EC2 Instance Connect" tab.
Click "Connect" to open a web-based terminal directly in your browser.
8. Monitor Running Processes
To check what commands are running inside the instance, use:
htop
This command provides a real-time view of system performance and running processes.
Step 2: Set Up a Jenkins Pipeline to Deploy EKS and Networking Services
1. Access Jenkins
Open your browser and go to:
http://<public-ip-of-jenkins-server>:8080
Retrieve the initial Jenkins admin password by running the following command on the Jenkins instance:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Follow the setup wizard:
Install Suggested Plugins.
Create an Admin Username & Password.
Complete the basic configuration.
Now, Jenkins is ready to use.
2. Install Required Plugins
To enable Jenkins to work with AWS and Terraform, install the following plugins manually:
Click on "Manage Jenkins" > "Plugins Manager".
Search for and install these plugins:
AWS Credentials → Securely stores AWS access keys.
Pipeline: AWS Steps → Adds built-in AWS-specific pipeline steps.
Terraform → Enables Terraform automation in Jenkins.
Pipeline: Stage View → Provides a visual representation of pipeline stages.
3. Configure AWS Credentials in Jenkins
Go to "Manage Jenkins" > "Credentials".
Click "Global" > "Add Credentials".
Fill in the details:
Kind: AWS Credentials
Scope: Global
ID: aws-creds
Access Key ID: from your AWS IAM user
Secret Access Key: from your AWS IAM user
Click "Create".
Note: Create an IAM User in AWS with Administrator Access and use its credentials here.
4. Configure Terraform in Jenkins
Go to "Manage Jenkins" > "Tools".
Scroll to Terraform Installation.
Provide:
Name: terraform
Installation Directory: find Terraform’s installation path by running the command below on the Jenkins instance:
whereis terraform
Click Save.
5. Create a New Jenkins Pipeline
This pipeline uses this repository, which contains all the Terraform source code needed to create production-grade infrastructure.
In Jenkins, go to Dashboard > New Item.
Enter an item name.
Select Pipeline and click OK.
Scroll to the Pipeline section:
Under Definition, select Pipeline script.
Copy and paste the following pipeline script:
properties([
    parameters([
        string(defaultValue: 'dev', name: 'Environment'),
        choice(choices: ['plan', 'apply', 'destroy'], name: 'Terraform_Action')
    ])
])
pipeline {
    agent any
    stages {
        stage('Preparing') {
            steps {
                sh 'echo Preparing'
            }
        }
        stage('Git Pulling') {
            steps {
                git branch: 'main', url: 'https://github.com/praduman8435/Production-ready-EKS-with-automation.git'
            }
        }
        stage('Init') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    sh 'terraform -chdir=eks/ init'
                }
            }
        }
        stage('Validate') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    sh 'terraform -chdir=eks/ validate'
                }
            }
        }
        stage('Action') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    script {
                        if (params.Terraform_Action == 'plan') {
                            sh "terraform -chdir=eks/ plan -var-file=${params.Environment}.tfvars"
                        } else if (params.Terraform_Action == 'apply') {
                            sh "terraform -chdir=eks/ apply -var-file=${params.Environment}.tfvars -auto-approve"
                        } else if (params.Terraform_Action == 'destroy') {
                            sh "terraform -chdir=eks/ destroy -var-file=${params.Environment}.tfvars -auto-approve"
                        } else {
                            error "Invalid value for Terraform_Action: ${params.Terraform_Action}"
                        }
                    }
                }
            }
        }
    }
}
6. Finalize Pipeline Setup
Enable Groovy Sandbox: Check the box for "Use Groovy Sandbox".
Click "Save".
7. Run the Pipeline
Wait for a minute, then click "Build with Parameters".
Select a Terraform action (plan, apply, or destroy).
Click "Build".
Navigate to the Console Output to track progress.
What This Pipeline Does
Connects Jenkins to AWS using stored credentials.
Fetches Terraform code from GitHub.
Initializes Terraform for EKS and networking setup.
Validates Terraform code before deployment.
Executes the selected Terraform action (plan, apply, or destroy).
Step 3: Set Up the Jump Server
Why Do You Need a Jump Server?
Since your EKS cluster is inside a VPC, it cannot be accessed directly from the internet. A Jump Server (Bastion Host) acts as a secure gateway, allowing access to private resources within your VPC.
How It Works:
Your EKS cluster and other private resources don’t have public IPs, so they can't be accessed directly.
Instead of exposing these private resources, you connect to a Jump Server first.
The Jump Server has a public IP and is placed in a public subnet, acting as an intermediary to access the private cluster securely.
1. Create a Jump Server in AWS
Go to AWS EC2 Console and click "Launch Instance".
Configure the Instance:
Instance Name: jump-server
AMI: Ubuntu
Instance Type: t2.medium
Key Pair: No need to attach a key pair (SSH disabled for security).
Network Settings:
VPC: Select the VPC created by the Jenkins Terraform pipeline.
Subnet: Choose any public subnet.
Storage: At least 30 GiB.
IAM Role: Attach an IAM profile with administrative access.
Install the required tools on the Jump Server automatically by adding the following script to the User Data field:
#!/bin/bash
sudo apt update -y

# Installing AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install

# Installing kubectl
sudo apt install curl -y
sudo curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

# Installing Helm
sudo snap install helm --classic

# Installing eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Launch the Instance by clicking "Launch Instance".
2. Connect to the Jump Server and Verify Access
Once the instance is running, access it using EC2 Instance Connect:
Go to AWS EC2 Console.
Select your Jump Server instance.
Click "Connect" > "EC2 Instance Connect".
Click "Connect" to open a web terminal.
3. Configure AWS Credentials on the Jump Server
To allow the Jump Server to interact with AWS services, configure AWS CLI:
aws configure
Enter AWS Access Key ID (from IAM user).
Enter AWS Secret Access Key (from IAM user).
Default region: set your AWS region (e.g., us-east-1).
Output format: press Enter (the default is json).
4. Update kubeconfig to Access the EKS Cluster
Run the following command to configure kubectl for your EKS cluster:
aws eks update-kubeconfig --name <your-eks-cluster-name> --region <your-region>
5. Verify the Cluster Connection
Check if the Jump Server can access the EKS cluster by listing the worker nodes:
kubectl get nodes
If you see the nodes, your Jump Server setup is successful! 🎉
Step 4: Set Up an AWS Load Balancer for EKS
In order to configure an AWS Load Balancer in our EKS cluster, we need a Service Account that allows Kubernetes to create and manage the load balancer automatically.
1. Create an IAM-Backed Service Account
The AWS Load Balancer Controller requires an IAM role with the necessary permissions to create and manage Elastic Load Balancers (ELB) in AWS.
Run the following command to create a service account with the required IAM role inside the EKS cluster:
eksctl create iamserviceaccount \
--cluster=<eks-cluster-name> \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerRole \
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve \
--region=ap-south-1
📌 Explanation:
--cluster=<eks-cluster-name> → the name of your EKS cluster.
--namespace=kube-system → deploys the service account in the kube-system namespace.
--name=aws-load-balancer-controller → creates a service account with this name.
--role-name AmazonEKSLoadBalancerRole → assigns this IAM role to the service account.
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy → attaches the AWS Load Balancer Controller IAM policy.
--approve → automatically applies the changes.
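Note that the command above assumes two prerequisites already exist: an IAM OIDC provider associated with the cluster, and the AWSLoadBalancerControllerIAMPolicy itself. If either is missing, they can be created first, along these lines (a sketch following the AWS Load Balancer Controller install docs; cluster name, account, and region are placeholders):

```shell
# Associate an IAM OIDC provider with the cluster (required for IAM-backed service accounts).
eksctl utils associate-iam-oidc-provider \
  --cluster <eks-cluster-name> \
  --region ap-south-1 \
  --approve

# Download the controller's IAM policy document and create the policy referenced above.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
```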
2. Add the AWS EKS Helm Repository
To deploy the AWS Load Balancer Controller, we use Helm, a package manager for Kubernetes.
Add the official AWS EKS Helm repository:
helm repo add eks https://aws.github.io/eks-charts
This repository contains pre-packaged Helm charts for essential AWS EKS components such as:
✅ AWS Load Balancer Controller
✅ EBS CSI Driver (for dynamic volume provisioning)
✅ VPC CNI Plugin (for networking enhancements)
✅ Cluster Autoscaler (for automatic scaling)
Update the repository to get the latest charts:
helm repo update
3. Install the AWS Load Balancer Controller
Now, install the AWS Load Balancer Controller using Helm:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
--namespace kube-system \
--set clusterName=<eks-cluster-name> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
📌 Explanation:
--namespace kube-system → deploys the controller in the kube-system namespace.
--set clusterName=<eks-cluster-name> → associates it with your EKS cluster.
--set serviceAccount.create=false → uses the existing IAM-backed service account.
--set serviceAccount.name=aws-load-balancer-controller → specifies the service account created earlier.
4. Verify the AWS Load Balancer Controller
Check if the Load Balancer Controller is running correctly:
kubectl get pods -n kube-system | grep aws-load-balancer-controller
5. Fixing Pods in Error or CrashLoopBackOff
If your AWS Load Balancer Controller pods are in Error or CrashLoopBackOff, it’s likely due to misconfiguration. To fix this, upgrade the Helm release with the correct settings:
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=<cluster-name> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=<your-region> \
--set vpcId=<your-vpc-id> \
-n kube-system
<cluster-name>: your EKS cluster name (e.g., dev-medium-eks-cluster).
<your-region>: the AWS region of your cluster (e.g., us-west-1).
<your-vpc-id>: the VPC ID where your EKS cluster runs (e.g., vpc-0123456789abcdef0).
What This Does
Upgrades/Installs: Updates the Helm release or installs it if missing.
Configures Correctly: Ensures the controller uses the right cluster, service account, region, and VPC.
Check if the pods are running:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
🚀 Your AWS Load Balancer Controller is now ready to manage Kubernetes services! 🎉
Step 5: Set Up and Configure ArgoCD on EKS
ArgoCD is a GitOps continuous delivery tool that automates the deployment of applications to Kubernetes. We will install ArgoCD in our EKS cluster and expose its UI for external access.
1. Create a Separate Namespace for ArgoCD
To keep ArgoCD components organized, create a dedicated namespace:
kubectl create namespace argocd
2. Install ArgoCD Using Manifests
Apply the official ArgoCD installation YAML to deploy its components:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
This will install all necessary ArgoCD components inside the argocd namespace.
3. Verify ArgoCD Installation
Check if all ArgoCD pods are running:
kubectl get pods -n argocd
4. Expose ArgoCD Server
By default, ArgoCD runs as a ClusterIP service, meaning it is only accessible inside the cluster. To access the UI externally, change it to a LoadBalancer service:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
This will assign a public IP to the ArgoCD server, making it accessible via an Elastic Load Balancer (ELB) in AWS.
5. Retrieve the External IP of ArgoCD
Run the following command to get the external URL:
kubectl get svc -n argocd argocd-server
Look for the EXTERNAL-IP in the output. This is the URL you’ll use to access the ArgoCD UI.
6. Get the ArgoCD Admin Password
By default, ArgoCD generates an admin password stored as a Kubernetes secret. Retrieve it using:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
Use this password to log in as the admin user.
7. Access the ArgoCD UI
Now you can access the ArgoCD UI through the Elastic Load Balancer created in AWS.
Login using:
Username: admin
Password: (retrieved from the secret above)
ArgoCD is now ready! You can start managing your Kubernetes deployments using GitOps. 🚀
Step 6: Configure SonarQube for DevSecOps Pipeline
SonarQube is a crucial tool for static code analysis, ensuring code quality and security in your DevSecOps pipeline. We will configure it within Jenkins for automated code scanning.
1. Verify if SonarQube is Running
Since SonarQube is running as a Docker container on the Jenkins server, check its status with:
docker ps
You should see a running SonarQube container exposed on port 9000.
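If the container is not listed, it may have exited (SonarQube is memory-hungry and can fail to start on small hosts). Assuming it was created with the name sonar, as in the user-data script, it can be restarted and inspected like this:

```shell
docker start sonar    # restart the existing SonarQube container
docker logs -f sonar  # follow startup logs until SonarQube reports it is operational
```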
2. Access SonarQube UI
Open any web browser and visit:
http://<public-ip-of-jenkins-server>:9000
Log in using the default credentials:
Username: admin
Password: admin
Once logged in, set a new password for security.
3. Generate an Authentication Token
Jenkins needs a token to authenticate with SonarQube for automated scans.
Go to Administration → Security → Users.
Click on Update Token.
Provide a name and set an expiration date (or leave it as "No Expiration").
Click Generate Token.
Copy and save the token securely (you will need it for Jenkins).
4. Create a Webhook for Jenkins Notifications
A webhook will notify Jenkins once SonarQube completes an analysis.
Navigate to Administration → Configuration → Webhooks.
Click Create Webhook.
Enter the details:
Name: Jenkins Webhook
URL: http://<public-ip-of-jenkins-server>:8080/sonarqube-webhook
Secret: (leave blank)
Click Create.
Now, the webhook will trigger Jenkins when a project analysis is complete.
5. Create a SonarQube Project for Code Analysis
SonarQube will analyze the frontend and backend code separately.
Frontend Analysis Configuration
Go to Projects → Manually → Create a New Project.
Fill in the required details (Project Name, Key, etc.).
Click Setup.
Choose Analyze Locally.
Select Use an Existing Token and paste the token generated earlier.
Choose Other if your build type is not listed.
Select OS: Linux.
SonarQube will generate an analysis command. Copy and save it.
Add this command to your Jenkins pipeline.
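For reference, the command SonarQube generates typically looks like the sketch below; the host URL and token are placeholders, and newer scanner versions use -Dsonar.token in place of -Dsonar.login:

```shell
sonar-scanner \
  -Dsonar.projectKey=frontend \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://<public-ip-of-jenkins-server>:9000 \
  -Dsonar.login=<sonar-qube-token>
```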
Backend Analysis Configuration
Repeat the same steps for the backend project:
Go to Projects → Create a New Project.
Fill in the required details.
Click Setup → Analyze Locally.
Use the previously generated token.
Choose Other as the build type if needed.
Select OS: Linux.
Copy the generated analysis command and save it.
Add the command to your Jenkins pipeline.
6. Final Verification
At this point:
✅ SonarQube is running and accessible.
✅ Jenkins has an authentication token to interact with SonarQube.
✅ A webhook is set up to notify Jenkins about completed scans.
✅ Projects are created, and analysis commands are ready for Jenkins execution.
Now, whenever Jenkins runs the pipeline, SonarQube will analyze the code and report quality & security issues. 🎯 ✅
Step 6: Create an ECR Repository for Docker Images
Amazon Elastic Container Registry (ECR) will store the frontend and backend Docker images used for deployment. Let's configure it and store necessary credentials in Jenkins.
1. Create Private ECR Repositories
Open AWS Console and navigate to the ECR service.
Click Create Repository.
Choose Private Repository.
Repository Name: frontend.
Repeat the same steps to create a backend repository.
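The same two repositories can also be created from the CLI (a sketch; the region is a placeholder, and repositories created this way are private by default):

```shell
aws ecr create-repository --repository-name frontend --region ap-south-1
aws ecr create-repository --repository-name backend --region ap-south-1
```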
2. Store Credentials in Jenkins
To integrate Jenkins with SonarQube, AWS ECR, and GitHub, we need to store various credentials securely.
a) Store SonarQube Token in Jenkins
Go to Jenkins Dashboard → Manage Jenkins → Credentials.
Select the appropriate Global credentials domain.
Click Add Credentials and fill in the details:
Kind: Secret Text
Scope: Global
Secret: <sonar-qube-token>
ID: sonar-token
Click Create.
b) Store AWS Account ID in Jenkins
Go to Credentials → Add Credentials.
Enter the details:
Kind: Secret Text
Scope: Global
Secret: <AWS-Account-ID>
ID: Account_ID
Click Create.
c) Store ECR Repository Names in Jenkins
For Frontend Repository:
Add New Credential:
Kind: Secret Text
Scope: Global
Secret: frontend
ID: ECR_REPO1
Click Create.
For Backend Repository:
Add New Credential:
Kind: Secret Text
Scope: Global
Secret: backend
ID: ECR_REPO2
Click Create.
d) Store GitHub Credentials in Jenkins
Add New Credential:
Kind: Username with Password
Scope: Global
Username: <GitHub-Username>
Password: <Personal-Access-Token>
ID: GITHUB-APP
e) Store GitHub Personal Access Token in Jenkins
Add New Credential:
Kind: Secret Text
Scope: Global
Secret: <Personal-Access-Token>
ID: github
All required credentials are now added.
Final Confirmation
✅ ECR Repositories Created
✅ SonarQube Token Stored in Jenkins
✅ AWS Account ID Saved
✅ ECR Repository Names Stored
✅ GitHub Credentials & Token Added
With these credentials configured, Jenkins can authenticate and push Docker images to AWS ECR seamlessly. 🎯
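Before running the pipelines, you can sanity-check the ECR login outside Jenkins. This is a sketch; the region and account ID placeholders correspond to the values stored above:

```shell
# Log Docker in to ECR with the same credentials Jenkins will use.
aws ecr get-login-password --region ap-south-1 | \
  docker login --username AWS --password-stdin <AWS-Account-ID>.dkr.ecr.ap-south-1.amazonaws.com
```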
Install and Configure Essential Plugins & Tools in Jenkins
To ensure seamless containerized builds, security analysis, and CI/CD automation, install and configure the necessary Jenkins plugins and tools.
1. Install Required Plugins
Navigate to Jenkins Dashboard → Manage Jenkins → Plugins → Available Plugins, then search and install the following:
✅ Docker – Enables Docker integration.
✅ Docker Pipeline – Provides Docker support in Jenkins pipelines.
✅ Docker Commons – Manages shared Docker images.
✅ Docker API – Allows interaction with Docker daemon.
✅ NodeJS – Supports Node.js builds.
✅ OWASP Dependency-Check – Detects security vulnerabilities in dependencies.
✅ SonarQube Scanner – Enables code quality analysis with SonarQube.
Once installed, restart Jenkins to apply changes.
2. Configure Essential Tools in Jenkins
a) NodeJS Installation
Go to Manage Jenkins → Tools
Under NodeJS, click Add NodeJS.
Fill in the required details.
Check the box Install Automatically.
Click Save.
b) SonarQube Scanner Installation
Under Tools Configuration, go to SonarQube Scanner.
Click Add SonarQube Scanner.
Fill in the required details.
Check Install Automatically.
Click Save.
c) OWASP Dependency Check Installation
Under Tools Configuration, go to Dependency Check.
Click Add Dependency Check.
Check Install Automatically from GitHub.
Click Save.
d) Docker Installation
Under Tools Configuration, go to Docker.
Click Add Docker.
Fill in the required details.
Check Install Automatically from Docker.com.
Click Save & Apply.
3. Configure SonarQube Webhook in Jenkins
To enable SonarQube notifications in Jenkins, configure the webhook.
Add SonarQube Server in Jenkins
Navigate to Manage Jenkins → Configure System.
Scroll to the SonarQube installation section.
Click Add SonarQube and enter:
Name: sonar-server
Server URL: http://<public-ip-of-jenkins-server>:9000
Server Authentication Token: <sonar-qube-token-credential-name>
Click Apply & Save.
Jenkins is now fully equipped to handle Docker builds, security analysis, and SonarQube scanning in the DevSecOps pipeline. 🚀
Step 7: Create a Jenkins Pipeline for Frontend
This pipeline automates the frontend build, security analysis, Docker image creation, and deployment updates for the DevSecOps pipeline.
1. Create a New Pipeline in Jenkins
Navigate to Jenkins Dashboard → New Item.
Enter a Pipeline Name (e.g., frontend-pipeline).
Select Pipeline as the item type.
Click OK to proceed.
Scroll down to the Pipeline section and choose Pipeline script.
2. Add the Pipeline Script
Copy and paste the following Jenkinsfile:
pipeline {
    agent any
    tools {
        nodejs 'nodejs'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
        AWS_ACCOUNT_ID = credentials('Account_ID')
        AWS_ECR_REPO_NAME = credentials('ECR_REPO1')
        AWS_DEFAULT_REGION = 'ap-south-1'
        REPOSITORY_URI = "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/"
    }
    stages {
        stage('Cleaning Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git credentialsId: 'GITHUB-APP', url: 'https://github.com/praduman8435/DevSecOps-in-Action.git', branch: 'main'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                dir('frontend') {
                    withSonarQubeEnv('sonar-server') {
                        sh '''
                            $SCANNER_HOME/bin/sonar-scanner \
                                -Dsonar.projectName=frontend \
                                -Dsonar.projectKey=frontend \
                                -Dsonar.sources=.
                        '''
                    }
                }
            }
        }
        stage('Quality Check') {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
                }
            }
        }
        stage('OWASP Dependency-Check Scan') {
            steps {
                dir('frontend') {
                    dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                    dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
                }
            }
        }
        stage('Trivy File Scan') {
            steps {
                dir('frontend') {
                    sh 'trivy fs . > trivyfs.txt'
                }
            }
        }
        stage('Docker Image Build') {
            steps {
                script {
                    dir('frontend') {
                        sh 'docker system prune -f'
                        sh 'docker container prune -f'
                        sh 'docker build -t ${AWS_ECR_REPO_NAME} .'
                    }
                }
            }
        }
        stage('ECR Image Pushing') {
            steps {
                script {
                    sh 'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'
                    sh 'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                    sh 'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                }
            }
        }
        stage('Trivy Image Scan') {
            steps {
                sh 'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} > trivyimage.txt'
            }
        }
        stage('Update Deployment File') {
            environment {
                GIT_REPO_NAME = "DevSecOps-in-Action"
                GIT_USER_NAME = "praduman8435"
            }
            steps {
                dir('k8s-manifests/frontend') {
                    withCredentials([string(credentialsId: 'github', variable: 'GITHUB_TOKEN')]) {
                        sh '''
                            git config user.email "praduman.cnd@gmail.com"
                            git config user.name "praduman"
                            imageTag=$(grep -oP '(?<=frontend:)[^ ]+' deployment.yaml)
                            sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                            git add deployment.yaml
                            git commit -m "Update deployment image to version ${BUILD_NUMBER}"
                            git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:main
                        '''
                    }
                }
            }
        }
    }
}
3. Build the Pipeline
Click Save & Apply.
Click Build Now to start the pipeline.
4. Verify SonarQube Analysis
Open SonarQube UI:
http://<public-ip-of-jenkins-server>:9000
Check if the SonarQube scan results appear in the UI under the frontend project.
Pipeline Workflow Summary
✅ Code Checkout from GitHub.
✅ SonarQube Scan for code quality analysis.
✅ Security Scans using OWASP Dependency-Check and Trivy.
✅ Docker Build & Push to Amazon ECR.
✅ Deployment Update in Kubernetes manifests.
The frontend pipeline is now fully automated and integrated into the DevSecOps workflow! 🚀
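The grep/sed tag bump performed in the Update Deployment File stage can be reproduced locally against a throwaway manifest. The image URI, repository name, and build numbers below are illustrative:

```shell
# Create a sample manifest with an old image tag.
cat > deployment.yaml <<'EOF'
    spec:
      containers:
      - name: frontend
        image: 123456789012.dkr.ecr.ap-south-1.amazonaws.com/frontend:17
EOF

AWS_ECR_REPO_NAME=frontend
BUILD_NUMBER=42

# Extract the current tag (everything after "frontend:") and swap in the new build number.
imageTag=$(grep -oP '(?<=frontend:)[^ ]+' deployment.yaml)
sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml

# Show the updated image line.
grep "image:" deployment.yaml
```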
Step 8: Create a Jenkins Pipeline for Backend
This pipeline automates the backend build, security scanning, Docker image creation, and Kubernetes deployment updates for the DevSecOps pipeline.
1. Create a New Pipeline in Jenkins
Go to Jenkins Dashboard → New Item.
Enter a Pipeline Name (e.g., backend-pipeline).
Select Pipeline as the item type.
Click OK to proceed.
Scroll to the Pipeline section and choose Pipeline script.
2. Add the Pipeline Script
Copy and paste the following Jenkinsfile:
pipeline {
    agent any
    tools {
        nodejs 'nodejs'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
        AWS_ACCOUNT_ID = credentials('Account_ID')
        AWS_ECR_REPO_NAME = credentials('ECR_REPO2')
        AWS_DEFAULT_REGION = 'ap-south-1'
        REPOSITORY_URI = "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/"
    }
    stages {
        stage('Cleaning Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git credentialsId: 'GITHUB-APP', url: 'https://github.com/praduman8435/DevSecOps-in-Action.git', branch: 'main'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                dir('backend') {
                    withSonarQubeEnv('sonar-server') { // Use withSonarQubeEnv wrapper
                        sh '''
                            $SCANNER_HOME/bin/sonar-scanner \
                                -Dsonar.projectName=backend \
                                -Dsonar.projectKey=backend \
                                -Dsonar.sources=.
                        '''
                    }
                }
            }
        }
        stage('Quality Check') {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
                }
            }
        }
        stage('OWASP Dependency-Check Scan') {
            steps {
                dir('backend') {
                    dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                    dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
                }
            }
        }
        stage('Trivy File Scan') {
            steps {
                dir('backend') {
                    sh 'trivy fs . > trivyfs.txt'
                }
            }
        }
        stage('Docker Image Build') {
            steps {
                script {
                    dir('backend') {
                        sh 'docker system prune -f'
                        sh 'docker container prune -f'
                        sh 'docker build -t ${AWS_ECR_REPO_NAME} .'
                    }
                }
            }
        }
        stage('ECR Image Pushing') {
            steps {
                script {
                    sh 'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'
                    sh 'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                    sh 'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                }
            }
        }
        stage('Trivy Image Scan') {
            steps {
                sh 'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} > trivyimage.txt'
            }
        }
        stage('Update Deployment File') {
            environment {
                GIT_REPO_NAME = "DevSecOps-in-Action"
                GIT_USER_NAME = "praduman8435"
            }
            steps {
                dir('k8s-manifests/backend') {
                    withCredentials([string(credentialsId: 'github', variable: 'GITHUB_TOKEN')]) {
                        sh '''
                            git config user.email "praduman.cnd@gmail.com"
                            git config user.name "praduman"
                            imageTag=$(grep -oP '(?<=backend:)[^ ]+' deployment.yaml)
                            sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                            git add deployment.yaml
                            git commit -m "Update deployment image to version ${BUILD_NUMBER}"
                            git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:main
                        '''
                    }
                }
            }
        }
    }
}
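The grep/sed pair in the Update Deployment File stage is the heart of the GitOps hand-off, so it is worth trying locally before wiring it into Jenkins. A minimal sketch, using a made-up account ID and a sample manifest (the real file lives in k8s-manifests/backend):

```shell
# Create a sample deployment manifest shaped like the one the pipeline edits
cat > deployment.yaml <<'EOF'
    spec:
      containers:
        - name: api
          image: 123456789012.dkr.ecr.ap-south-1.amazonaws.com/backend:41
EOF

AWS_ECR_REPO_NAME=backend
BUILD_NUMBER=42

# Same logic as the pipeline: extract whatever tag currently follows "backend:",
# then swap it for the current build number
imageTag=$(grep -oP '(?<=backend:)[^ ]+' deployment.yaml)
sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml

grep "backend:" deployment.yaml   # shows the updated image line
```

Because ArgoCD watches this file, the `git push` of the retagged manifest is what actually rolls out the new image — the pipeline itself never calls kubectl.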
3. Build the Pipeline
Click Save & Apply.
Click Build Now to trigger the pipeline.
4. Verify SonarQube Analysis
Open SonarQube UI:
http://<public-ip-of-jenkins-server>:9000
Check the SonarQube scan results under the backend project.
Pipeline Workflow Summary
✅ Code Checkout from GitHub.
✅ SonarQube Scan for code quality analysis.
✅ Security Scans using OWASP Dependency-Check and Trivy.
✅ Docker Build & Push to Amazon ECR.
✅ Deployment Update in Kubernetes manifests.
The backend pipeline is now fully automated and integrated into the DevSecOps workflow! 🚀
Step 9: Setup Application in ArgoCD
In this step, we will deploy the application (frontend, backend, database, and ingress) to the EKS cluster using ArgoCD.
1. Open ArgoCD UI
Get the ArgoCD server External-IP:
kubectl get svc -n argocd argocd-server
Access the ArgoCD UI using the EXTERNAL-IP shown in the output.
Log in using the username and password you created earlier.
2. Connect GitHub Repository to ArgoCD
Go to Settings → Repositories.
Click "Connect Repository using HTTPS".
Enter:
Project:
default
Repository URL:
https://github.com/praduman8435/DevSecOps-in-Action.git
Authentication: None (if public repo)
Click "Connect".
3. Create Kubernetes Namespace for Deployment
Open terminal and run:
kubectl create namespace three-tier
Verify the namespace:
kubectl get namespaces
4. Deploy Database in ArgoCD
In ArgoCD UI, go to Applications → Click New Application.
Fill in the following details:
Application Name:
three-tier-database
Project Name:
default
Sync Policy:
Automatic
Repository URL:
https://github.com/praduman8435/DevSecOps-in-Action.git
Path:
k8s-manifests/database
Cluster URL:
https://kubernetes.default.svc
Namespace:
three-tier
Click Create.
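If you prefer a declarative, GitOps-style bootstrap over clicking through the UI, the same application can be described as an ArgoCD Application manifest and applied with kubectl. A sketch with the field values from the form above (targetRevision: main is an assumption about the tracked branch):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tier-database
  namespace: argocd          # Application objects live in ArgoCD's own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/praduman8435/DevSecOps-in-Action.git
    targetRevision: main     # assumed branch; adjust to what your repo tracks
    path: k8s-manifests/database
  destination:
    server: https://kubernetes.default.svc
    namespace: three-tier
  syncPolicy:
    automated: {}            # equivalent of choosing "Automatic" sync in the UI
```

The backend, frontend, and ingress applications in the following steps differ only in `metadata.name` and `spec.source.path`.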
5. Deploy Backend in ArgoCD
Go to Applications → Click New Application.
Fill in:
Application Name:
three-tier-backend
Project Name:
default
Sync Policy:
Automatic
Repository URL:
https://github.com/praduman8435/DevSecOps-in-Action.git
Path:
k8s-manifests/backend
Cluster URL:
https://kubernetes.default.svc
Namespace:
three-tier
Click Create.
6. Deploy Frontend in ArgoCD
Go to Applications → Click New Application.
Fill in:
Application Name:
three-tier-frontend
Project Name:
default
Sync Policy:
Automatic
Repository URL:
https://github.com/praduman8435/DevSecOps-in-Action.git
Path:
k8s-manifests/frontend
Cluster URL:
https://kubernetes.default.svc
Namespace:
three-tier
Click Create.
7. Deploy Ingress in ArgoCD
Go to Applications → Click New Application.
Fill in:
Application Name:
three-tier-ingress
Project Name:
default
Sync Policy:
Automatic
Repository URL:
https://github.com/praduman8435/DevSecOps-in-Action.git
Path:
k8s-manifests
Cluster URL:
https://kubernetes.default.svc
Namespace:
three-tier
Click Create.
8. Verify Deployment in ArgoCD
Go to Applications in ArgoCD UI.
Check if all applications are Synced and Healthy.
If needed, Manually Sync any pending application.
🎉 Congratulations! Your application is now fully deployed using ArgoCD 🚀 and can be accessed through the ingress at:
http://3-111-158-0.nip.io/
Step 10: Configure Monitoring using Prometheus and Grafana
In this step, we will install and configure Prometheus and Grafana using Helm charts to monitor the Kubernetes cluster.
1. Add Helm Repositories for Prometheus & Grafana
Run the following commands to add and update the Helm repositories:
helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
2. Install Prometheus and Grafana using Helm
helm install prometheus prometheus-community/kube-prometheus-stack \
  --set prometheus.server.persistentVolume.storageClass=gp2 \
  --set alertmanager.alertmanagerSpec.persistentVolume.storageClass=gp2
3. Access Prometheus UI
Get the Prometheus service details:
kubectl get svc #look for prometheus-kube-prometheus-prometheus svc
Change the service type from ClusterIP to LoadBalancer:
kubectl edit svc prometheus-kube-prometheus-prometheus
Find the line type: ClusterIP and change it to type: LoadBalancer.
You can now access the Prometheus server using the external IP of the prometheus-kube-prometheus-prometheus service:
kubectl get svc prometheus-kube-prometheus-prometheus
Open <EXTERNAL-IP>:9090 in your browser.
Click on Status and select Target. You'll see a list of Targets displayed. In Grafana, we'll use this as a data source.
4. Access Grafana UI
Get the Grafana service details
kubectl get svc #look for the prometheus-grafana svc
By default, it uses ClusterIP. Change it to LoadBalancer:
kubectl edit svc prometheus-grafana
Find the line type: ClusterIP and change it to type: LoadBalancer.
Get the external IP of Grafana:
kubectl get svc prometheus-grafana
Open
<EXTERNAL-IP>
in your browser.
5. Get Grafana Admin Password
kubectl get secret prometheus-grafana -n default -o jsonpath="{.data.admin-password}" | base64 --decode
Username:
admin
Password: (output from the above command)
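The jsonpath-plus-base64 pattern above is worth understanding on its own: Kubernetes stores every Secret value base64-encoded, and the trailing pipe simply decodes it. A local sketch of the round trip, using prom-operator (the chart's usual default admin password) as a stand-in value:

```shell
# Encode a sample password the way the API server stores it in a Secret
encoded=$(printf '%s' 'prom-operator' | base64)
echo "stored in secret: $encoded"

# Decoding reverses it -- exactly what `| base64 --decode` does in the command above
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "admin password: $decoded"
```

If the decoded value ever looks truncated, check that the encoding step did not append a newline — `printf '%s'` (rather than `echo`) avoids that.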
6. Configure Prometheus as a Data Source in Grafana
Login to Grafana UI.
Go to Connections → Data Sources.
Click Add data source → Select Prometheus.
Provide the Prometheus URL (<prometheus-loadbalancer-dns>:9090) if it is not pre-filled.
Click Save & Test.
🎉 Congratulations! Your Kubernetes cluster is now being monitored using Prometheus & Grafana!
7. Setting Up Dashboards in Grafana for Kubernetes Monitoring
Grafana allows us to visualize Kubernetes cluster and resource metrics effectively. We’ll set up two essential dashboards to monitor our cluster using Prometheus as the data source.
Dashboard 1: Kubernetes Cluster Monitoring
This dashboard provides an overview of the Kubernetes cluster, including node health, resource usage, and workload performance.
Steps to Import the Dashboard:
Open the Grafana UI and navigate to Dashboards.
Click on New → Import.
In the Import via Grafana.com field, enter 6417 (Prometheus Kubernetes Cluster Monitoring Dashboard).
Click Load.
Select Prometheus as the data source.
Click Import.
You should now see a comprehensive dashboard displaying Kubernetes cluster metrics.
Dashboard 2: Kubernetes Resource Monitoring
This dashboard provides insights into individual Kubernetes resources such as pods, deployments, and namespaces.
Steps to Import the Dashboard:
Open the Grafana UI and navigate to Dashboards.
Click on New → Import.
Enter 17375 (Kubernetes Resources Monitoring Dashboard).
Click Load.
Select Prometheus as the data source.
Click Import.
Now, you have two powerful dashboards to monitor both the overall cluster health and specific Kubernetes resources in real-time.
Conclusion
This Ultimate DevSecOps Project is all about bringing security into the DevOps pipeline while deploying a scalable, secure, and fully automated three-tier application on AWS EKS. By combining the power of Jenkins, SonarQube, Trivy, OWASP Dependency-Check, Terraform, ArgoCD, Prometheus, and Grafana, we've built a robust CI/CD pipeline that ensures code quality, security, and smooth deployments—without any manual headaches!
With SonarQube and OWASP Dependency-Check, we keep our code secure and compliant. Trivy scans our Docker images before they even reach AWS ECR, blocking vulnerabilities before they hit production. Jenkins takes care of automation, while ArgoCD ensures our Kubernetes deployments stay in perfect sync. And of course, Prometheus and Grafana give us full visibility into system health and performance, so we're always on top of things.
This project isn't just a DevSecOps tutorial—it's a real-world playbook for modern software delivery. Whether you're a DevOps pro, security enthusiast, or just diving into cloud automation, this guide sets you up with the tools and best practices to master DevSecOps in Kubernetes.
🚀 Ready to take your DevSecOps game to the next level? Let’s build, secure, and deploy—without limits! 🔐🎯
Written by

Praduman Prajapati
Bridging the gap between development and operations. Hey there! I’m a DevOps Engineer passionate about automation, cloud technologies, and making infrastructure scalable and efficient. I specialize in CI/CD, cloud automation, and infrastructure optimization, working with tools like AWS, Kubernetes, Terraform, Docker, Jenkins, and Ansible to streamline development and deployment processes. I also love sharing my knowledge through blogs on DevOps, Kubernetes, and cloud technologies—breaking down complex topics into easy-to-understand insights. Let’s connect and talk all things DevOps!