Real Project: Automated Secure Software Delivery with Jenkins, Kubernetes, ArgoCD, Prometheus-Grafana & Full-Stack Observability

Subroto Sharma

Modern software development requires security to be integrated throughout the entire pipeline rather than being an afterthought. This project demonstrates a comprehensive DevSecOps implementation that deploys an Amazon Prime clone application securely while providing robust monitoring and automated deployment.

By integrating security scanning, containerization, and continuous deployment with GitOps principles, this pipeline ensures that vulnerabilities are caught early in the development lifecycle while maintaining high deployment velocity.

Architecture Overview

The implemented architecture follows these key principles:

  • Security-first approach: Multiple security scanning layers throughout the pipeline

  • Infrastructure as Code: All infrastructure defined and versioned in Git

  • GitOps workflow: Automated deployments triggered by Git changes

  • Comprehensive observability: Monitoring and alerting across all components

Pipeline Architecture Diagram

Key Technologies Used

  • CI/CD: Jenkins

  • Containerization: Docker

  • Security Scanning: SonarQube, Trivy, Docker Scout

  • Container Orchestration: Amazon EKS (Kubernetes)

  • GitOps Deployment: ArgoCD

  • Monitoring: Prometheus, Grafana

Environment Setup Guide

1. Jenkins Server Configuration

Our pipeline begins with a properly configured Jenkins server that orchestrates the entire CI/CD process.

Install Jenkins for continuous integration; once it is running, access it at <vm_ip>:8080.

# Update system packages
sudo apt update -y

# Install Java (requirement for Jenkins)
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y

# Install Jenkins
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update -y
sudo apt install jenkins -y
sudo systemctl start jenkins

2. Docker Configuration

Docker enables containerization of our application, ensuring consistency across environments.

sudo apt install docker.io -y
sudo systemctl enable docker
sudo systemctl start docker
sudo chmod 666 /var/run/docker.sock
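A note on the last command: `chmod 666 /var/run/docker.sock` opens the Docker socket to every local user. A commonly preferred alternative is group-based access (a sketch; the `jenkins` user name assumes the default Jenkins package install):

```shell
# Instead of chmod 666, add the users that need Docker to the docker group
sudo usermod -aG docker jenkins   # Jenkins service user (assumes default install)
sudo usermod -aG docker $USER     # your interactive user

# Restart Jenkins / re-login so the new group membership takes effect
sudo systemctl restart jenkins
```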

3. SonarQube Deployment

SonarQube provides static code analysis to identify code quality issues and security vulnerabilities.

docker run -d --name Sonar-Qube -p 9000:9000 sonarqube:lts-community
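SonarQube takes a minute or two to initialize. One way to wait for it before configuring tokens is to poll its public `api/system/status` endpoint (a sketch, assuming SonarQube is on the same host):

```shell
# Poll SonarQube's status endpoint until it reports UP
until curl -s http://localhost:9000/api/system/status | grep -q '"status":"UP"'; do
  echo "waiting for SonarQube..."
  sleep 5
done
echo "SonarQube is up - log in at http://<vm_ip>:9000 (default credentials admin/admin)"
```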

4. Trivy Installation

Trivy scans container images for vulnerabilities in the operating system packages and application dependencies.

sudo apt-get install wget apt-transport-https gnupg
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb generic main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
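Beyond producing a plain report, Trivy can also gate a build: with `--exit-code`, the scan returns non-zero when findings at the listed severities appear, so a CI stage can fail fast. A hedged example (adjust severities to taste):

```shell
# Fail (exit 1) when HIGH or CRITICAL vulnerabilities are found in the filesystem
trivy fs --severity HIGH,CRITICAL --exit-code 1 .

# The same gating works for container images
trivy image --severity HIGH,CRITICAL --exit-code 1 docker-username/amazon-prime:latest
```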

5. Docker Scout Configuration

Docker Scout analyzes the software supply chain and generates a Software Bill of Materials (SBOM).

docker login -u <username> -p <password>
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s -- -b /usr/local/bin

Additional tools introduced in this project include:

  • Docker Scout for software composition analysis (SCA) and generating SBOM (Software Bill of Materials).

  • Eclipse Temurin Installer for managing JDK versions.

  • Prometheus Metrics Plugin for monitoring Jenkins performance.

  • Email Extension Plugin for automated notifications.

Jenkins Pipeline Configuration

Required Jenkins Plugins

To enable our comprehensive pipeline functionality, we need to install these essential plugins:

  1. Docker Pipeline

  2. SonarQube Scanner

  3. NodeJS

  4. Email Extension

  5. Prometheus metrics

  6. Kubernetes CLI

Tool Configuration

In Manage Jenkins → Tools, configure:

  • JDK: Temurin JDK 17, for the Java-based SonarQube scanner.

  • Node.js: version 16, for JavaScript dependencies.

  • Docker: the default Docker installation.

Credential Configuration

Set up the following credentials in Jenkins for secure integrations:

  • GitHub Token: For repository access and webhook integration

  • DockerHub Credentials: For pushing container images

  • SonarQube Token: For code analysis authentication

  • Email Credentials: For notification delivery

GitHub: Create a personal access token (Settings → Developer Settings → Personal Access Tokens), then store it in Jenkins as a new secret text credential.

DockerHub: Add your Docker Hub username and password as a username/password credential in Jenkins.

SonarQube: In SonarQube, go to Administration → Security → Users → Administrator → Tokens, generate a new token, and copy it. In Jenkins, create a credential named sonar-token and paste the token there.

Email: In your Google account, generate an app password, copy it, and store it in Jenkins as a new credential.

SonarQube Integration

  1. Generate a token in SonarQube (Administration → Security → Users → Administrator → Tokens)

  2. Configure Jenkins with the SonarQube URL and token

  3. Set up the webhook in SonarQube (Administration → Configuration → Webhooks)

    • URL: <Jenkins-URL>/sonarqube-webhook

4. Configure the SonarQube URL and token in Jenkins system settings (Dashboard → Manage Jenkins → System)

Email Notification Setup

To send build notifications:

  • Configure SMTP server settings in Jenkins.

  • Enable "Extended Email Notifications" in the pipeline for customizable HTML reports.

Step 2: Implementing the Jenkins CI/CD Pipeline

pipeline{
 
  agent any
 
  tools{
      jdk 'jdk17'
      nodejs 'nodejs16'
  }
 
  environment{
      SONAR_HOME = tool 'sonar-scanner'
  }
 
  stages{
 
      stage('Clean Workspace'){
          steps{
              cleanWs()
          }
      }
 
      stage('Git Checkout'){
          steps{
              git branch: 'main', credentialsId: 'git-credential', url: 'https://github.com/subrotosharma/devsecops-automation-hub.git'
          }
      }
     
      stage('Sonarqube Analysis'){
          steps{
              withSonarQubeEnv('sonar-scanner'){
                  sh '$SONAR_HOME/bin/sonar-scanner -Dsonar.projectKey=amazonprime -Dsonar.projectName=amazonprime'
              }
          }
      }
     
      stage("Quality Gate"){
          steps{
              script{
                  waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
              }
          }
      }
     
      stage("Installing Dependencies"){
          steps{
              sh 'npm install'
          }
      }
     
      stage("Trivy scan"){
          steps{
              sh 'trivy fs . > trivy-output.txt'
          }
      }
     
      stage("Build Docker Image") {
          steps {
              script {
                  withDockerRegistry(credentialsId: 'docker-cred') {
                      sh 'docker build -t docker-username/amazon-prime:$BUILD_NUMBER .'
                  }
              }
          }
      }
     
      stage("Push Docker Image"){
          steps {
              script {
                  withDockerRegistry(credentialsId: 'docker-cred') {
                      sh 'docker push docker-username/amazon-prime:$BUILD_NUMBER'
                  }
              }
          }
      }
      stage("Docker-Scout Image"){
          steps{
              script{
                  withDockerRegistry(credentialsId: 'docker-cred'){
                      sh 'docker-scout quickview docker-username/amazon-prime:$BUILD_NUMBER'
                      sh 'docker-scout cves docker-username/amazon-prime:$BUILD_NUMBER'
                      sh 'docker-scout recommendations docker-username/amazon-prime:$BUILD_NUMBER'
                  }
              }
          }
      }
     
      stage("Testing Deploy to Docker Container"){
          steps{
              script{
                  withDockerRegistry(credentialsId: 'docker-cred'){
                      sh 'docker run -d --name prime-video -p 3001:3000  docker-username/amazon-prime:$BUILD_NUMBER' 
                  }
              }
          }
      }
     
      stage("Deployment to Production"){
          environment {
              GIT_REPO_NAME = "devsecops-automation-hub"
              GIT_USER_NAME = "git-username"
          }
          steps {
              withCredentials([string(credentialsId: 'git-cred', variable: 'GITHUB_TOKEN')]){
                  sh '''
                      git config user.email "user-email-id@gmail.com"
                      git config user.name "User Name"
                      cp Kubernetes-development/* K8S-Production/
                      sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" K8S-Production/Deployment.yaml
                      git add K8S-Production/
                      git commit -m "Update Deployment Manifest for Production"
                      git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:main
                  '''
              }
          }
      }
  }
  post {
  always {
      emailext attachLog: true,
          subject: "'${currentBuild.result}'",
          body: """
              <html>
              <body>
                  <div style="background-color: #FFA07A; padding: 10px; margin-bottom: 10px;">
                      <p style="color: white; font-weight: bold;">Project: ${env.JOB_NAME}</p>
                  </div>
                  <div style="background-color: #90EE90; padding: 10px; margin-bottom: 10px;">
                      <p style="color: white; font-weight: bold;">Build Number: ${env.BUILD_NUMBER}</p>
                  </div>
                  <div style="background-color: #87CEEB; padding: 10px; margin-bottom: 10px;">
                      <p style="color: white; font-weight: bold;">URL: ${env.BUILD_URL}</p>
                  </div>
              </body>
              </html>
          """,
          to: 'user-email-id@gmail.com',
          mimeType: 'text/html',
          attachmentsPattern: 'trivy-output.txt'
      }
  }
}

Pipeline Stages

The following pipeline stages were implemented:

  1. Clean Workspace: Ensures that each build starts with a clean slate to avoid conflicts.

cleanWs()

2. Git Checkout: Clones the repository’s main branch using GitHub credentials.

git branch: 'main', credentialsId: 'git-credential', url: 'https://github.com/subrotosharma/devsecops-automation-hub.git'

3. SonarQube Analysis Scans the codebase for bugs, vulnerabilities, and code smells using SonarQube.

withSonarQubeEnv('sonar-scanner') {
  sh '$SONAR_HOME/bin/sonar-scanner -Dsonar.projectKey=amazonprime -Dsonar.projectName=amazonprime'
}

4. Quality Gate Waits for SonarQube’s quality gate results to ensure the code meets security and quality standards.

script {
  waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
}

Quality gates in SonarQube are a set of conditions that determine whether a project’s code meets the required quality level. For this stage to work, one more step is needed: adding a webhook in SonarQube.

When using webhooks in SonarQube and Jenkins, a webhook is configured in SonarQube to call back into Jenkins. This allows the pipeline to either continue or fail based on the results of the analysis.

Go to Administration > Configuration > Webhooks.

URL: <Jenkins-URL>/sonarqube-webhook

5. Installing Dependencies: Installs all application dependencies.

sh 'npm install'

6. Trivy Scan: Performs a file system scan for vulnerabilities and outputs results to a file.

sh 'trivy fs . > trivy-output.txt'

7. Build Docker Image: Builds a container image for the application with a unique tag.

withDockerRegistry(credentialsId: 'docker-cred') {
  sh 'docker build -t docker-username/amazon-prime:$BUILD_NUMBER .'
}

8. Push Docker Image: Pushes the built image to DockerHub.

withDockerRegistry(credentialsId: 'docker-cred') {
  sh 'docker push docker-username/amazon-prime:$BUILD_NUMBER'
}

9. Docker Scout Image Analysis: Utilizes Docker Scout for supply chain security, checking for:

  • Vulnerabilities (CVEs).

  • Recommendations for better security practices.

withDockerRegistry(credentialsId: 'docker-cred') {
  sh 'docker-scout quickview docker-username/amazon-prime:$BUILD_NUMBER'
  sh 'docker-scout cves docker-username/amazon-prime:$BUILD_NUMBER'
  sh 'docker-scout recommendations docker-username/amazon-prime:$BUILD_NUMBER'
}

10. Testing Deploy to Docker Container: Deploys the container locally for functional testing.

withDockerRegistry(credentialsId: 'docker-cred') {
  sh 'docker run -d --name prime-video -p 3001:3000 docker-username/amazon-prime:$BUILD_NUMBER'
}

11. Deployment to Production: Updates Kubernetes manifests with the latest image tag, commits them to GitHub, and triggers ArgoCD for deployment.

sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" K8S-Production/Deployment.yaml
git add K8S-Production/
git commit -m "Update Deployment Manifest for Production"
git push ...

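The `sed` substitution is the heart of this GitOps hand-off: the manifest in Git carries a `replaceImageTag` placeholder, and each build stamps its own number into it before committing. The step can be sketched in isolation like this (the /tmp path and sample manifest line are illustrative only):

```shell
# Simulate the placeholder substitution on a throwaway manifest
mkdir -p /tmp/K8S-Production
printf 'image: docker-username/amazon-prime:replaceImageTag\n' > /tmp/K8S-Production/Deployment.yaml

BUILD_NUMBER=42   # Jenkins injects this automatically in a real run
sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" /tmp/K8S-Production/Deployment.yaml

cat /tmp/K8S-Production/Deployment.yaml   # image: docker-username/amazon-prime:42
```

Because the tag is unique per build, ArgoCD sees every commit as a new desired state and rolls the Deployment forward.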
12. Post-Build Email Notifications: Sends build status reports via email with logs and Trivy scan results.

Step 4: Setting Up Amazon EKS and ArgoCD

Amazon EKS

Provisioned a Kubernetes cluster using eksctl:

# Create Cluster
eksctl create cluster --name=<name> \
                    --region=<region-code> \
                    --zones=ap-south-1a,ap-south-1b \
                    --without-nodegroup

# Get List of clusters
eksctl get cluster

To enable AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create and associate an OIDC identity provider.

eksctl utils associate-iam-oidc-provider \
  --region region-code \
  --cluster <cluster-name> \
  --approve 

# Create Public Node Group

eksctl create nodegroup --cluster=<cluster-name> \
--region=<region-code> \
--name=<cluster-name>-ng-public1 \
--node-type=t3.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=<public-key-name> \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access

Replace <name>, <region-code>, <cluster-name>, <public-key-name> with their appropriate values.
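Once the cluster and node group exist, point kubectl at the new cluster and confirm that the nodes have joined (same placeholders as above):

```shell
# Write/update the kubeconfig entry for the new cluster
aws eks update-kubeconfig --region <region-code> --name <cluster-name>

# Nodes should report Ready after a minute or two
kubectl get nodes
```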

ArgoCD Deployment

  1. Installed ArgoCD in the EKS cluster.

# Create ArgoCD namespace

kubectl create namespace argocd

# Install ArgoCD components

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get initial admin password

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Port forward to access ArgoCD UI

kubectl port-forward svc/argocd-server -n argocd 8080:443

Access the dashboard through the port-forward (https://localhost:8080), logging in as admin with the initial password retrieved above.

2. Created an ArgoCD application pointing to the K8S-Production/ folder in the GitHub repo.

application.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: amazon-prime
  namespace: argocd
spec:
  project: default
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  source:
    repoURL: 'https://github.com/subrotosharma/devsecops-automation-hub.git'
    path: K8S-Production
    targetRevision: HEAD
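As written, this Application syncs only when triggered manually from the ArgoCD UI. For the hands-off GitOps flow described above, an automated sync policy can be added under spec (an optional sketch using standard ArgoCD fields):

```yaml
  # Appended under spec: in application.yaml
  syncPolicy:
    automated:
      prune: true     # delete cluster resources that were removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```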

Step 5: Create Dockerfile & Kubernetes Manifests

Create the Dockerfile, Deployment.yaml, and Service.yaml, and push them to the GitHub repository.

Dockerfile:

# Pin to Node 16 to match the Jenkins NodeJS tool configuration
FROM node:16-alpine

# Create working directory
WORKDIR /app

# Copy package manifests first to leverage Docker layer caching
COPY *.json /app/

# Install the dependencies
RUN npm install

# Copy the rest of the application into the working directory
COPY . /app/

# Expose the app on port 3000
EXPOSE 3000

# Command to start the app
CMD ["npm", "start"]
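Because `COPY . /app/` pulls in the entire build context, a .dockerignore file (a minimal sketch) keeps the host's node_modules and VCS data out of the image and speeds up builds:

```
# .dockerignore - paths excluded from the Docker build context
node_modules
npm-debug.log
.git
```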

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazon-prime
  namespace: default
  labels:
    app: prime-app
spec:
  selector:
    matchLabels:
      app: prime-app
  replicas: 2
  template:
    metadata:
      name: amazon-prime
      labels:
        app: prime-app
    spec:
      containers:
      - name: prime-container
        image: docker-username/amazon-prime:replaceImageTag
        ports:
        - containerPort: 3000

Service:

apiVersion: v1
kind: Service
metadata:
  name: prime-service
  labels:
    app: prime-service
spec:
  selector:
    app: prime-app
  ports:
  - port: 3000
    targetPort: 3000
  type: LoadBalancer

Start Pipeline:

  • Run the pipeline, then copy the LoadBalancer hostname from the prime-service and open it in a browser to see your application.
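The LoadBalancer hostname can be read straight from the Service object (a sketch using kubectl's jsonpath output):

```shell
# Print the external hostname provisioned for the prime-service LoadBalancer
kubectl get svc prime-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```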

Step 6: Observability with Prometheus and Grafana

Prometheus Setup

  1. Installed Prometheus on a dedicated VM.
  • First, create a dedicated Linux user for Prometheus and download Prometheus:

sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz

  • Extract Prometheus files, move them, and create directories:

tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml

  • Set ownership for directories:

sudo chown -R prometheus:prometheus /etc/prometheus/ /data/

  • Create a systemd unit configuration file for Prometheus:

sudo vim /etc/systemd/system/prometheus.service

  • Add the following content to the prometheus.service file:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/data \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle

[Install]
WantedBy=multi-user.target

Here’s a brief explanation of the key parts in this prometheus.service file:

  • User and Group specify the Linux user and group under which Prometheus will run.

  • ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.

  • web.listen-address configures Prometheus to listen on all network interfaces on port 9090.

  • web.enable-lifecycle allows for management of Prometheus through API calls.

Enable and start Prometheus:

sudo systemctl enable prometheus
sudo systemctl start prometheus

Verify Prometheus’s status:

sudo systemctl status prometheus

You can access Prometheus in a web browser using your server’s IP and port 9090:

http://<your-server-ip>:9090

2. Configured Node Exporter to collect system-level metrics.

Create a system user for Node Exporter and download Node Exporter:

sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

Extract Node Exporter files, move the binary, and clean up:

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*

Create a systemd unit configuration file for Node Exporter:

sudo vim /etc/systemd/system/node_exporter.service

Add the following content to the node_exporter.service file:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]

User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind

[Install]

WantedBy=multi-user.target

Enable and start Node Exporter:

sudo systemctl enable node_exporter
sudo systemctl start node_exporter

Verify the Node Exporter’s status:

sudo systemctl status node_exporter

You can access raw Node Exporter metrics at http://<your-server-ip>:9100/metrics

Added Prometheus scrape jobs in /etc/prometheus/prometheus.yml for:

Node Exporter:

- job_name: 'node_exporter'
  static_configs:
    - targets: ['localhost:9100']

Jenkins:

- job_name: 'jenkins'
  metrics_path: '/prometheus'
  static_configs:
    - targets: ['<jenkins-ip>:8080']

If metrics_path is not specified, it defaults to /metrics.

Check the validity of the configuration file:

promtool check config /etc/prometheus/prometheus.yml

Reload the Prometheus configuration without restarting:

curl -X POST http://localhost:9090/-/reload

You can access Prometheus targets at:

http://<your-prometheus-ip>:9090/targets

Grafana Dashboard

Installed Grafana on the same VM as Prometheus.

First, ensure that all necessary dependencies are installed:

sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common

Add the GPG key for Grafana:

sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://packages.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null

Add the repository for Grafana stable releases (using the signed-by keyring style, consistent with the Jenkins and Trivy repositories above, since apt-key is deprecated):

echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list

Install Grafana:

sudo apt-get -y install grafana

To automatically start Grafana after a reboot, enable the service:

sudo systemctl enable grafana-server

Then, start Grafana:

sudo systemctl start grafana-server

Verify the status of the Grafana service to ensure it’s running correctly:

sudo systemctl status grafana-server

Open a web browser and navigate to Grafana using your server’s IP address. The default port for Grafana is 3000.

http://<your-server-ip>:3000

2. Configured Prometheus as a data source: in Grafana, add a new Prometheus data source, enter your Prometheus server URL, and click Save & test.

3. Created dashboards for:

Jenkins CI/CD performance: import the community Jenkins dashboard by pasting dashboard ID 9964, select the Prometheus data source, and the dashboard is ready to view.

Monitor Kubernetes with Prometheus

Prometheus is a powerful monitoring and alerting toolkit, and you’ll use it to monitor your Kubernetes cluster. Additionally, you’ll install the node exporter using Helm to collect metrics from your cluster nodes.

Install Node Exporter using Helm

To begin monitoring your Kubernetes cluster, you’ll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:

  1. Add the Prometheus Community Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

2. Create a Kubernetes namespace for the Node Exporter:

kubectl create namespace prometheus-node-exporter

3. Install the Node Exporter using Helm:

helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter
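To confirm the install worked (the chart deploys a DaemonSet, so one exporter pod should run per node):

```shell
kubectl get daemonset,pods -n prometheus-node-exporter
```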

4. Create a node-exporter-svc.yml to expose the prometheus-node-exporter service, so that the Kubernetes nodes can be added as a scrape job in Prometheus.

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: amazon-prime
  name: node-exporter
  namespace: prometheus-node-exporter
spec:
  ports:
    - name: node-exporter
      port: 9100
      protocol: TCP
      targetPort: 9100
  selector:
    app: node-exporter
  type: NodePort

Push it to the GitHub repository under the K8S-Production path, so that ArgoCD automatically deploys the resource to the cluster.

5. Update your Prometheus configuration (/etc/prometheus/prometheus.yml) to add a new job for scraping metrics from <k8s-node-ip>:9100/metrics. You can do this by adding the following configuration:

- job_name: 'kubernetes'
  metrics_path: '/metrics'
  static_configs:
    - targets: ['K8S-node1Ip:9100']

  • Copy the K8s node IP and paste it into the targets list in /etc/prometheus/prometheus.yml.

  • The metrics path remains the default, i.e., /metrics.

  • Don’t forget to reload or restart Prometheus to apply these changes.

curl -X POST http://localhost:9090/-/reload

🔐 Enhancing Security with Docker Scout (Personalized Summary)

While integrating Docker Scout into my DevSecOps pipeline for the Amazon Prime Clone project, I gained hands-on experience with essential security practices:

1. Code & Dependency Awareness

  • Manual Code (SAST): I used SonarQube to scan developer-authored code for bugs and security flaws during CI.

  • External Libraries (SCA): Docker Scout helped analyze third-party and transitive dependencies (e.g., via npm), highlighting potential risks.

2. CVE Detection & Continuous Monitoring

  • Docker Scout continuously scanned for Common Vulnerabilities and Exposures (CVEs), even after deployment.

  • This ensured early detection and aligned with the "fail fast" DevSecOps principle.

3. Supply Chain Security with SBOM

  • I generated a Software Bill of Materials (SBOM) to list all components in my container image.

  • The SBOM enabled compliance, audit readiness, and proactive CVE tracking.

4. Docker Scout Workflow in CI/CD

  • Quick Overview: docker-scout quickview for image health.

  • Vulnerability Analysis: docker-scout cves for detailed CVE reports.

  • Remediation Advice: docker-scout recommendations for actionable fixes.

5. Post-Deploy Security (DAST)

  • After deploying containers, I performed runtime security analysis using Trivy and Docker Scout.

  • This helped catch misconfigurations and exposed APIs during actual execution.

6. Shift Left & Developer Remediation

  • By introducing security early in the lifecycle, I reduced risk and saved cost.

  • Docker Scout provided specific fixes like version bumps (e.g., Library A v1.2.3 → v1.3.0), which I verified by re-running the pipeline.

🎯 Key Outcome

Security was enforced at every stage—from code to runtime—without slowing development. Docker Scout empowered me to embed secure practices directly into my CI/CD workflow while maintaining supply chain integrity.

This hands-on implementation taught me how modern DevSecOps blends automation with actionable security to build production-ready, secure apps.

📌 Conclusion

This end-to-end DevSecOps pipeline for a Prime Video clone project is a complete implementation of modern, secure, and observable software delivery. It showcases best practices in CI/CD, security automation, and infrastructure observability.


Written by

Subroto Sharma

I'm a passionate and results-driven DevOps Engineer with hands-on experience in automating infrastructure, optimizing CI/CD pipelines, and enhancing software delivery through modern DevOps and DevSecOps practices. My expertise lies in bridging the gap between development and operations to streamline workflows, increase deployment velocity, and ensure application security at every stage of the software lifecycle. I specialize in containerization with Docker and Kubernetes, infrastructure-as-code using Terraform, and managing scalable cloud environments—primarily on AWS. I’ve worked extensively with tools like Jenkins, GitHub Actions, SonarQube, Trivy, and various monitoring/logging stacks to build secure, efficient, and resilient systems. Driven by automation and a continuous improvement mindset, I aim to deliver value faster and more reliably by integrating cutting-edge tools and practices into development pipelines.