Real Project: Automated Secure Software Delivery with Jenkins, Kubernetes, ArgoCD, Prometheus & Grafana, and Full-Stack Observability

Table of contents
- Architecture Overview
- Key Technologies Used
- Environment Setup Guide
- Jenkins Pipeline Configuration
- Email Notification Setup
- Step 2: Implementing the Jenkins CI/CD Pipeline
- Step 4: Setting Up Amazon EKS and ArgoCD
- Step 5: Create Dockerfile & Kubernetes Manifests
- Step 6: Observability with Prometheus and Grafana
- Grafana Dashboard
- Monitor Kubernetes with Prometheus
Modern software development requires security to be integrated throughout the entire pipeline rather than being an afterthought. This project demonstrates a comprehensive DevSecOps implementation that deploys an Amazon Prime clone application securely while providing robust monitoring and automated deployment.
By integrating security scanning, containerization, and continuous deployment with GitOps principles, this pipeline ensures that vulnerabilities are caught early in the development lifecycle while maintaining high deployment velocity.
Architecture Overview
The implemented architecture follows these key principles:
Security-first approach: Multiple security scanning layers throughout the pipeline
Infrastructure as Code: All infrastructure defined and versioned in Git
GitOps workflow: Automated deployments triggered by Git changes
Comprehensive observability: Monitoring and alerting across all components
Pipeline Architecture Diagram
Key Technologies Used
CI/CD: Jenkins
Containerization: Docker
Security Scanning: SonarQube, Trivy, Docker Scout
Container Orchestration: Amazon EKS (Kubernetes)
GitOps Deployment: ArgoCD
Monitoring: Prometheus, Grafana
Environment Setup Guide
1. Jenkins Server Configuration
Our pipeline begins with a properly configured Jenkins server that orchestrates the entire CI/CD process.
Install Jenkins for continuous integration; once it is running, access it at <vm_ip>:8080.
# Update system packages
sudo apt update

# Install Java (a Jenkins prerequisite)
sudo apt install -y fontconfig openjdk-17-jre

# Add the Jenkins repository key and source, then install Jenkins
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins

# Enable and start the Jenkins service
sudo systemctl enable jenkins
sudo systemctl start jenkins
2. Docker Configuration
Docker enables containerization of our application, ensuring consistency across environments.
sudo apt install -y docker.io

# Allow the current user and the Jenkins service user to talk to the Docker daemon
sudo usermod -aG docker $USER
sudo usermod -aG docker jenkins
3. SonarQube Deployment
SonarQube provides static code analysis to identify code quality issues and security vulnerabilities.
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community
4. Trivy Installation
Trivy scans container images for vulnerabilities in the operating system packages and application dependencies.
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install -y trivy
5. Docker Scout Configuration
Docker Scout analyzes the software supply chain and generates a Software Bill of Materials (SBOM).
docker login -u <username> -p <password>

# Install the standalone Docker Scout CLI via its official install script
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s --
Additional tools introduced in this project include:
Docker Scout for software composition analysis (SCA) and generating SBOM (Software Bill of Materials).
Eclipse Temurin Installer for managing JDK versions.
Prometheus Metrics Plugin for monitoring Jenkins performance.
Email Extension Plugin for automated notifications.
Jenkins Pipeline Configuration
Required Jenkins Plugins
To enable our comprehensive pipeline functionality, we need to install these essential plugins:
Docker Pipeline
SonarQube Scanner
NodeJS
Email Extension
Prometheus metrics
Kubernetes CLI
Tool Configuration
Configure the following under Manage Jenkins > Tools:
JDK (Temurin JDK 17) for Java-based SonarQube scanner.
Node.js (version 16) for JavaScript dependencies.
Docker: the Docker installation used by the image build and push stages.
Credential Configuration
Set up the following credentials in Jenkins for secure integrations:
GitHub Token: For repository access and webhook integration
DockerHub Credentials: For pushing container images
SonarQube Token: For code analysis authentication
Email Credentials: For notification delivery
GitHub: Create a personal access token (Settings > Developer Settings > Personal Access Tokens), then add it in Jenkins as a new secret-text credential.
DockerHub: Add your Docker Hub username and password in Jenkins as a username/password credential (referenced as docker-cred in the pipeline below).
SonarQube: In the SonarQube dashboard, go to Administration > Security > Users > Administrator > Tokens, generate a new token, and copy it. In Jenkins, create a credential named sonar-token and paste the token there.
Email: In your Google account, create a new app password and copy it. Back in Jenkins, store it as a new credential for the Email Extension plugin.
SonarQube Integration
1. Generate a token in SonarQube (Administration → Security → Users → Administrator → Tokens).
2. Configure the SonarQube URL and token in Jenkins system settings (Dashboard > Manage Jenkins > System).
3. Set up the webhook in SonarQube (Administration → Configuration → Webhooks) with the URL <Jenkins-URL>/sonarqube-webhook.
Email Notification Setup
To send build notifications:
Configured SMTP server settings in Jenkins.
Enabled “Extended Email Notifications” in the pipeline for customizable HTML reports.
Step 2: Implementing the Jenkins CI/CD Pipeline
pipeline {
    agent any
    tools {
        jdk 'jdk17'        // Temurin JDK 17; tool names assumed to match Manage Jenkins > Tools
        nodejs 'node16'    // Node.js 16
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        // Stages 1-12, shown individually below
    }
}
Pipeline Stages
The following pipeline stages were implemented:
1. Clean Workspace: Ensures that each build starts with a clean slate to avoid conflicts.
cleanWs()
2. Git Checkout: Clones the repository’s main branch using GitHub credentials.
git branch: 'main', credentialsId: 'git-credential', url: 'https://github.com/subrotosharma/devsecops-automation-hub.git'
3. SonarQube Analysis: Scans the codebase for bugs, vulnerabilities, and code smells using SonarQube.
withSonarQubeEnv('sonar-scanner') {
    // project key and name are placeholders; use your own SonarQube project settings
    sh '''$SCANNER_HOME/bin/sonar-scanner \
        -Dsonar.projectName=amazon-prime \
        -Dsonar.projectKey=amazon-prime'''
}
4. Quality Gate: Waits for SonarQube’s quality gate results to ensure the code meets security and quality standards.
script {
    waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
}
Quality gates in SonarQube are a set of conditions that determine whether a project’s code meets the required quality level. This stage needs one more piece of setup: adding a webhook in SonarQube.
When using webhooks in SonarQube and Jenkins, a webhook is configured in SonarQube to call back into Jenkins. This allows the pipeline to either continue or fail based on the results of the analysis.
Go to Administration > Configuration > Webhooks.
URL: <Jenkins-URL>/sonarqube-webhook
5. Installing Dependencies: Installs all application dependencies.
sh 'npm install'
6. Trivy Scan: Performs a file system scan for vulnerabilities and outputs results to a file.
sh 'trivy fs . > trivy-output.txt'
7. Build Docker Image: Builds a container image for the application with a unique tag.
withDockerRegistry(credentialsId: 'docker-cred') {
    // image name is a placeholder; replace with your Docker Hub repository
    sh "docker build -t <dockerhub-username>/amazon-prime:${BUILD_NUMBER} ."
}
8. Push Docker Image: Pushes the built image to DockerHub.
withDockerRegistry(credentialsId: 'docker-cred') {
    sh "docker push <dockerhub-username>/amazon-prime:${BUILD_NUMBER}"
}
9. Docker Scout Image Analysis: Utilizes Docker Scout for supply chain security, checking for:
Vulnerabilities (CVEs).
Recommendations for better security practices.
withDockerRegistry(credentialsId: 'docker-cred') {
    // image name is a placeholder; the three Scout commands are described in the summary below
    sh "docker-scout quickview <dockerhub-username>/amazon-prime:${BUILD_NUMBER}"
    sh "docker-scout cves <dockerhub-username>/amazon-prime:${BUILD_NUMBER}"
    sh "docker-scout recommendations <dockerhub-username>/amazon-prime:${BUILD_NUMBER}"
}
10. Testing Deploy to Docker Container: Deploys the container locally for functional testing.
withDockerRegistry(credentialsId: 'docker-cred') {
    // container name and port are assumptions for a local smoke test
    sh "docker run -d --name amazon-prime -p 3000:3000 <dockerhub-username>/amazon-prime:${BUILD_NUMBER}"
}
11. Deployment to Production: Updates Kubernetes manifests with the latest image tag, commits them to GitHub, and triggers ArgoCD for deployment.
sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" K8S-Production/Deployment.yaml
# Commit the updated manifest so ArgoCD picks up the change
# (assumes Git credentials are available to the job)
git add K8S-Production/Deployment.yaml
git commit -m "Update image tag to build ${BUILD_NUMBER}"
git push origin main
12. Post-Build Email Notifications: Sends build status reports via email with logs and Trivy scan results.
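A minimal sketch of the corresponding post block using the Email Extension plugin, placed after stages in the Jenkinsfile (the recipient address and attachment pattern are assumptions):
post {
    always {
        emailext(
            subject: "${env.JOB_NAME} - Build #${env.BUILD_NUMBER} - ${currentBuild.currentResult}",
            body: """<p>Job: ${env.JOB_NAME}</p>
                     <p>Build: ${env.BUILD_NUMBER}</p>
                     <p>URL: ${env.BUILD_URL}</p>""",
            to: 'you@example.com',                   // assumed recipient
            attachmentsPattern: 'trivy-output.txt',  // attaches the Trivy report from stage 6
            mimeType: 'text/html'
        )
    }
}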
Step 4: Setting Up Amazon EKS and ArgoCD
Amazon EKS
Provisioned a Kubernetes cluster using eksctl:
# Create Cluster (control plane only; the node group is created below).
# The zone pair is an assumption; pick any two zones in your region.
eksctl create cluster --name=amcdemo --region=us-east-1 --zones=us-east-1a,us-east-1b --without-nodegroup

# Get List of clusters
eksctl get clusters
To enable and use AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create and associate an OIDC identity provider.
eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster amcdemo \
    --approve
# Create Public Node Group
eksctl create nodegroup --cluster=amcdemo \
--region=us-east-1 \
--name=amcdemo-ng-public1 \
--node-type=t3.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=<public-key-name> \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
Replace <public-key-name> with the name of an existing EC2 key pair, and adjust the cluster name, region, and node-group settings to match your environment.
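With the node group created, it helps to point kubectl at the cluster and confirm the worker nodes registered (standard AWS CLI and kubectl commands, using the cluster name and region above):
aws eks update-kubeconfig --region us-east-1 --name amcdemo
kubectl get nodes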
ArgoCD Deployment
1. Installed ArgoCD in the EKS cluster.
# Create ArgoCD namespace
kubectl create namespace argocd
# Install ArgoCD components
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Port forward to access ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:443
Access the dashboard through the port-forward above; copy the initial admin password first and log in as admin.
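The argocd CLI can log in through the same port-forward as an alternative to the UI (a sketch; assumes the argocd CLI is installed locally):
argocd login localhost:8080 --username admin --password <initial-admin-password> --insecure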
2. Created an ArgoCD application pointing to the K8S-Production/ folder in the GitHub repo.
application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: amazon-prime          # application name is an assumption
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/subrotosharma/devsecops-automation-hub.git
    targetRevision: main
    path: K8S-Production
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Step 5: Create Dockerfile & Kubernetes Manifests
Create the Dockerfile, Deployment.yaml, and Service.yaml, and push them to the GitHub repository.
Dockerfile:
FROM node:alpine
# Standard Node.js build steps; adjust to the application's actual scripts (port is an assumption)
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazon-prime
  labels:
    app: amazon-prime
spec:
  replicas: 2                  # replica count is an assumption
  selector:
    matchLabels:
      app: amazon-prime
  template:
    metadata:
      labels:
        app: amazon-prime
    spec:
      containers:
        - name: amazon-prime
          image: <dockerhub-username>/amazon-prime:replaceImageTag   # tag rewritten by the pipeline's sed step
          ports:
            - containerPort: 3000
Service:
apiVersion: v1
kind: Service
metadata:
  name: amazon-prime-svc
spec:
  type: LoadBalancer           # exposes an external hostname to open the app
  selector:
    app: amazon-prime
  ports:
    - port: 80
      targetPort: 3000
Start the pipeline:
- Copy the LoadBalancer hostname from the service and open your application in a browser.
Step 6: Observability with Prometheus and Grafana
Prometheus Setup
- Installed Prometheus on a dedicated VM.
- First, create a dedicated Linux user for Prometheus and download Prometheus:
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
- Extract Prometheus files, move them, and create directories:
tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
- Set ownership for directories:
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
- Create a systemd unit configuration file for Prometheus:
sudo vim /etc/systemd/system/prometheus.service
- Add the following content to the prometheus.service file:
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
Here’s a brief explanation of the key parts in this prometheus.service file:
User and Group specify the Linux user and group under which Prometheus will run.
ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.
web.listen-address configures Prometheus to listen on all network interfaces on port 9090.
web.enable-lifecycle allows for management of Prometheus through API calls.
Enable and start Prometheus:
sudo systemctl enable prometheus
sudo systemctl start prometheus
Verify Prometheus’s status:
sudo systemctl status prometheus
You can access Prometheus in a web browser using your server’s IP and port 9090:
http://<your-server-ip>:9090
Configured Node Exporter to collect system-level metrics.
Create a system user for Node Exporter and download Node Exporter:
sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
Extract Node Exporter files, move the binary, and clean up:
tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
Create a systemd unit configuration file for Node Exporter:
sudo vim /etc/systemd/system/node_exporter.service
Add the following content to the node_exporter.service file:
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
Enable and start Node Exporter:
sudo systemctl enable node_exporter
sudo systemctl start node_exporter
Verify the Node Exporter’s status:
sudo systemctl status node_exporter
You can access the Node Exporter metrics at http://<your-server-ip>:9100/metrics
Added Prometheus scrape jobs in /etc/prometheus/prometheus.yml for:
Node Exporter:
- job_name: 'node_exporter'
  static_configs:
    - targets: ['localhost:9100']
Jenkins:
- job_name: 'jenkins'
  metrics_path: '/prometheus'      # the Jenkins Prometheus plugin serves metrics here
  static_configs:
    - targets: ['<jenkins-ip>:8080']
If metrics_path is not set, Prometheus defaults to /metrics; the Jenkins job overrides it because the Prometheus plugin exposes its endpoint at /prometheus.
Check the validity of the configuration file:
promtool check config /etc/prometheus/prometheus.yml
Reload the Prometheus configuration without restarting:
curl -X POST http://localhost:9090/-/reload
You can access Prometheus targets at:
http://<your-prometheus-ip>:9090/targets
Grafana Dashboard
Installed Grafana on the same VM as Prometheus.
First, ensure that all necessary dependencies are installed:
sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common
Add the GPG key for Grafana:
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
Add the repository for Grafana stable releases:
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list |
Install Grafana:
sudo apt-get update
sudo apt-get -y install grafana
To automatically start Grafana after a reboot, enable the service:
sudo systemctl enable grafana-server
Then, start Grafana:
sudo systemctl start grafana-server
Verify the status of the Grafana service to ensure it’s running correctly:
sudo systemctl status grafana-server
Open a web browser and navigate to Grafana using your server’s IP address. The default port for Grafana is 3000.
http://<your-server-ip>:3000
2. Configured Prometheus as a data source:
- Add your Prometheus server URL (http://<your-server-ip>:9090).
- Click Save & Test.
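Alternatively, the data source can be provisioned from a file so the setup lives in version control (a sketch using Grafana's standard provisioning directory; the URL assumes Prometheus runs on the same VM):
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true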
3. Created dashboards for:
Jenkins CI/CD performance:
Import the Jenkins dashboard by pasting the dashboard ID 9964 and selecting the Prometheus data source.
Now you can view your dashboard.
Monitor Kubernetes with Prometheus
Prometheus is a powerful monitoring and alerting toolkit, and you’ll use it to monitor your Kubernetes cluster. Additionally, you’ll install the node exporter using Helm to collect metrics from your cluster nodes.
Install Node Exporter using Helm
To begin monitoring your Kubernetes cluster, you’ll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:
1. Add the Prometheus Community Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
2. Create a Kubernetes namespace for the Node Exporter:
kubectl create namespace prometheus-node-exporter
3. Install the Node Exporter using Helm:
helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter
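Before wiring Prometheus to the exporter, you can confirm the pods are running (a quick kubectl check):
kubectl get pods -n prometheus-node-exporter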
4. Create a node-exporter-svc.yml to expose the prometheus-node-exporter service so that the Kubernetes nodes can be added as a scrape job in Prometheus.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-node-exporter
  namespace: prometheus-node-exporter
spec:
  type: NodePort                # exposes the exporter on each node's IP
  selector:
    app.kubernetes.io/name: prometheus-node-exporter   # default label from the Helm chart
  ports:
    - port: 9100
      targetPort: 9100
      nodePort: 9001            # matches the <node-ip>:9001 target below; requires a NodePort range that includes 9001
Push it to the GitHub repository under the K8S-Production path so that ArgoCD automatically deploys the resource to the cluster.
5. Update your Prometheus configuration (prometheus.yml) to add a new job for scraping metrics from <k8s-node-ip>:9001/metrics. You can do this by adding the following configuration to your prometheus.yml file:
- job_name: 'kubernetes'
  static_configs:
    - targets: ['<k8s-node-ip>:9001']
- Copy the Kubernetes node IP.
- Paste it into the targets list in /etc/prometheus/prometheus.yml.
The metrics path remains the default, /metrics.
Don’t forget to reload or restart Prometheus to apply these changes to your configuration.
curl -X POST http://localhost:9090/-/reload
🔐 Enhancing Security with Docker Scout (Personalized Summary)
While integrating Docker Scout into my DevSecOps pipeline for the Amazon Prime Clone project, I gained hands-on experience with essential security practices:
1. Code & Dependency Awareness
First-Party Code (SAST): I used SonarQube to scan developer-authored code for bugs and security flaws during CI.
External Libraries (SCA): Docker Scout helped analyze third-party and transitive dependencies (e.g., via npm), highlighting potential risks.
2. CVE Detection & Continuous Monitoring
Docker Scout continuously scanned for Common Vulnerabilities and Exposures (CVEs), even after deployment.
This ensured early detection and aligned with the "fail fast" DevSecOps principle.
3. Supply Chain Security with SBOM
I generated a Software Bill of Materials (SBOM) to list all components in my container image.
The SBOM enabled compliance, audit readiness, and proactive CVE tracking.
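As a sketch, recent versions of the Scout CLI can emit that SBOM directly (the sbom subcommand and image name here are assumptions):
docker-scout sbom <dockerhub-username>/amazon-prime:latest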
4. Docker Scout Workflow in CI/CD
Quick Overview: docker-scout quickview for image health.
Vulnerability Analysis: docker-scout cves for detailed CVE reports.
Remediation Advice: docker-scout recommendations for actionable fixes.
5. Post-Deploy Security (DAST)
After deploying containers, I performed runtime security analysis using Trivy and Docker Scout.
This helped catch misconfigurations and exposed APIs during actual execution.
6. Shift Left & Developer Remediation
By introducing security early in the lifecycle, I reduced risk and saved cost.
Docker Scout provided specific fixes like version bumps (e.g., Library A v1.2.3 → v1.3.0), which I verified by re-running the pipeline.
🎯 Key Outcome
Security was enforced at every stage—from code to runtime—without slowing development. Docker Scout empowered me to embed secure practices directly into my CI/CD workflow while maintaining supply chain integrity.
This hands-on implementation taught me how modern DevSecOps blends automation with actionable security to build production-ready, secure apps.
📌 Conclusion
This end-to-end DevSecOps pipeline for a Prime Video clone project is a complete implementation of modern, secure, and observable software delivery. It showcases best practices in CI/CD, security automation, and infrastructure observability.