Scaling DevSecOps with Kubernetes: Zero Downtime, Auto Healing, and More


Take your CI/CD pipeline to the next level with Kubernetes-powered production deployments, built for scale, resilience, and real-world traffic.
Missed the full DevSecOps journey?
👉 Start here with the full 6-phase blog
👉 Or check the Docker-only version first
Introduction:
In this third phase of our DevSecOps journey, we take things to the next level with Kubernetes. While Docker helped us containerize and deploy our app, real-world production demands more:
⏱️ Zero Downtime during updates
🔁 Rollbacks when things go wrong
📈 Auto Scaling to handle load spikes
❤️ Self-Healing containers that recover automatically
Kubernetes provides all of that — and more.
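For a quick taste of what those guarantees look like in practice, here is a minimal Deployment sketch (the name, image, and probe path are illustrative, not from this project): the rolling-update settings keep full capacity during a release, and the liveness probe lets Kubernetes restart unhealthy containers on its own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # never drop below full capacity during updates
      maxSurge: 1                 # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25       # placeholder image
          ports:
            - containerPort: 80
          livenessProbe:          # self-healing: restart the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10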
In this blog, we’ll move from Docker-only deployment to a robust Kubernetes setup, covering:
✅ Kubernetes Basics (Clusters, Pods, Services)
✅ Deployment Strategies
✅ EKS Setup (AWS Managed K8s)
✅ Writing YAMLs (Deployment, Services, ConfigMaps)
✅ Final Production Push
Step 1: Launch an EC2 Instance for Kubernetes Setup
We’ll start by launching a new EC2 instance where we’ll install and configure our Kubernetes environment.
AMI: Amazon Linux 2 (Kernel 5.10)
Instance Type:
t2.large
Storage: 25 GB EBS Volume
Key Pair: Use your existing key pair (or create a new one)
IAM Role: Attach the same IAM role used in your Docker-based setup
(or create a new IAM role with EC2, S3, EKS full access if needed)
💡 Tip: Ensure port 22 (SSH) is open in your security group for accessing the instance.
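If you prefer the CLI over the console, a roughly equivalent launch command looks like this (the AMI ID, key pair, security group, and instance profile below are placeholders to replace with your own):
# AMI should be Amazon Linux 2; the security group must allow inbound SSH (port 22)
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.large \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx \
  --iam-instance-profile Name=my-devsecops-role \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":25}}]'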
Step 2: Install the Complete DevSecOps Tech Stack
Install Git
yum install git -y
Install Jenkins
# Add Jenkins repo
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
# Install Java 17 & Jenkins
yum install java-17-amazon-corretto -y
yum install jenkins -y
# Start and enable Jenkins
systemctl start jenkins
systemctl enable jenkins
systemctl status jenkins
Install Docker
yum install docker -y
systemctl start docker
systemctl enable docker
systemctl status docker
chmod 777 /var/run/docker.sock
Install Terraform (or follow HashiCorp's official Install Terraform guide)
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum install terraform -y
Install SonarQube (Using Docker)
docker run -itd --name sonar -p 9000:9000 sonarqube:lts-community
Install Trivy (Image Vulnerability Scanner)
# Update bashrc to include /usr/local/bin
vim ~/.bashrc
export PATH=$PATH:/usr/local/bin/
source ~/.bashrc
# Download and install Trivy
wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.tar.gz
tar zxvf trivy_0.18.3_Linux-64bit.tar.gz
sudo mv trivy /usr/local/bin/
Verify Installations
git --version
jenkins --version
docker version
terraform version
trivy --version
docker ps
Install AWS CLI v2, or follow the official guide: Installing or updating to the latest version of the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Install kubectl (Kubernetes CLI), or follow the official guide: Install and Set Up kubectl on Linux
curl -LO "https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
Install eksctl (for creating EKS clusters), or follow the official instructions to download the latest release:
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" -o eksctl.tar.gz
tar -zxvf eksctl.tar.gz
sudo mv eksctl /usr/local/bin/
eksctl version
aws – CLI for accessing your AWS account
kubectl – to interact with your Kubernetes clusters
eksctl – simplifies EKS cluster creation and management
Navigate to the Terraform Directory
git clone https://github.com/PasupuletiBhavya/devsecops-project.git
cd devsecops-project
git checkout master
cd k8s-project/eks-terraform
Inside this folder, we have .tf files that define:
The AWS region & provider
The VPC, subnets, and EKS cluster
IAM roles and node groups
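The repo splits this across several .tf files, but the heart of it boils down to something like the sketch below (resource and variable names here are illustrative, not the repo's exact code):
resource "aws_eks_cluster" "eks" {
  name     = "EKS_CLOUD"
  role_arn = aws_iam_role.eks_cluster.arn      # IAM role for the control plane

  vpc_config {
    subnet_ids = var.subnet_ids                # subnets from the VPC definition
  }
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.eks.name
  node_group_name = "eks-node-group"
  node_role_arn   = aws_iam_role.eks_nodes.arn # IAM role for the worker nodes
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 2
  }
}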
How Does Terraform Manage Infrastructure Internally?
When we use Terraform to create resources (AMIs, instance types, EBS volumes, IPs), it stores all the details in a state file (terraform.tfstate).
What Terraform Stores:
✅ AMI IDs
✅ Instance Types
✅ EBS Volume Details
✅ Key Pair Names
✅ Public & Private IPs
✅ DNS Info
✅ Subnet and VPC IDs
✅ IAM Roles
✅ Security Groups
Why is the State File Important?
The .tfstate file acts like a source of truth for Terraform.
It keeps track of what has already been created and prevents duplication or drift.
Without it:
Terraform won’t know what already exists
You risk duplicating resources or breaking infrastructure
Why Store it in S3?
Instead of keeping it locally, we store the state file in an S3 bucket so:
It’s safe and backed up
Team members can share it
It avoids duplication or conflicts
Step-3: Update your backend.tf, provider.tf, and main.tf files as shown below to configure Terraform with the S3 backend and AWS provider before creating the EKS cluster.
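A minimal sketch of that configuration (the bucket name and region are placeholders; use your own S3 bucket):
# backend.tf — keep the state file in S3
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # replace with your bucket
    key    = "eks/terraform.tfstate"
    region = "us-east-1"
  }
}

# provider.tf — AWS provider and region
provider "aws" {
  region = "us-east-1"
}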
After updating your backend.tf, provider.tf, and main.tf files:
#Initializes Terraform, downloads providers, and configures the S3 backend.
terraform init
# Shows a preview of what resources will be created, updated, or destroyed.
terraform plan
#Provisions the entire EKS infrastructure without manual confirmation
terraform apply --auto-approve
Step 4: Connect to EKS Cluster
After creating the EKS cluster with Terraform, follow these steps:
#Check if cluster exists:
eksctl get cluster --region us-east-1
You should see your cluster listed. If EKSCTL CREATED is False, that means it was created via Terraform (as expected!).
#Connect kubectl to EKS:
aws eks update-kubeconfig --region us-east-1 --name EKS_CLOUD
This sets up your kubeconfig so kubectl can talk to the cluster.
#Verify nodes
kubectl get nodes
✅ Output should show your worker node(s) in Ready state.
Our cluster and server are created!
So, Why Did You See a Server Created When You Made a Cluster?
That was a worker node EC2 that EKS created for you using a Node Group. That EC2:
Belongs to your cluster
Hosts your app containers (inside Kubernetes pods)
Is not your Jenkins server
Is not the control plane
💡 Understanding Our Setup: Ops Server vs Pre-Prod Cluster
Ops Server (Jenkins EC2 Instance):
This acts as our control center — where Jenkins is installed and all our DevSecOps pipelines are written and triggered.
It automates everything: code checkout, scans, image builds, testing, deployments, and even Slack notifications.
Pre-Prod Cluster (EKS_CLOUD):
This is our Kubernetes cluster, provisioned via Terraform & eksctl.
It serves as the final staging and production environment.
👉 After testing in Dev & QA, the same Docker image is deployed: first in staging, then in production.
Dev = Developer testing
UAT = Client-side testing (QA)
Staging = Pre-prod environment
Prod = Final deployment
Environment-wise CI/CD Pipeline Structure
To maintain control, flexibility, and rollback options, we create dedicated pipelines for each environment:
🔹 UAT Environment (Internal Testing by QA or Clients)
1 Build Pipeline: Builds and tags Docker image (UAT-specific version)
1 Deploy Pipeline: Deploys the UAT image to the UAT namespace in EKS
🔹 Staging Environment (Final Testing Before Production)
1 Build Pipeline: Builds and pushes the staging Docker image
1 Deploy Pipeline: Deploys the same image to the staging namespace
🔹 Production Environment (Live Users)
1 Deploy Pipeline only: No rebuild is done — the same image from staging is reused to ensure stability and traceability
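To be clear about what "no rebuild" means in practice: promotion is, at most, a re-tag and push of the exact bits that already passed staging. A hedged sketch (in this post the prod pipeline simply deploys the staging tag as-is):
# Optional: give the verified staging image a prod tag without rebuilding it
docker pull bhavyap007/finalround:staging-v1
docker tag bhavyap007/finalround:staging-v1 bhavyap007/finalround:prod-v1
docker push bhavyap007/finalround:prod-v1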
Now access your Jenkins dashboard using your EC2 instance's public IP address and port 8080:
Install all necessary plugins, and configure Jenkins credentials and tools.
You can refer to 👉 my previous blogs for configuration setup
Step-5: Let's write our pipeline
Create a new job (UAT_Build_Deployment) → Pipeline → start writing our pipeline
This Jenkins pipeline handles the complete build process for the UAT environment.
pipeline {
    agent any
    tools {
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME = tool 'mysonar'
    }
    stages {
        stage('CODE') {
            steps {
                git "https://github.com/PasupuletiBhavya/devsecops-project.git"
            }
        }
        stage('CQA') {
            steps {
                withSonarQubeEnv('mysonar') {
                    sh '''
                        $SCANNER_HOME/bin/sonar-scanner \
                        -Dsonar.projectName=camp \
                        -Dsonar.projectKey=camp
                    '''
                }
            }
        }
        stage('QualityGates') {
            steps {
                waitForQualityGate abortPipeline: false, credentialsId: 'sonar'
            }
        }
        stage('NPM Test') {
            steps {
                sh 'npm install'
            }
        }
        stage('Docker Build') {
            steps {
                sh 'docker build -t bhavyap007/finalround:UAT-v1 .'
            }
        }
        stage('Trivy Scan') {
            steps {
                sh 'trivy image bhavyap007/finalround:UAT-v1'
            }
        }
        stage('Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'dockerhub') {
                        sh 'docker push bhavyap007/finalround:UAT-v1'
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'Slack Notifications'
            slackSend (
                channel: 'all-camp',
                message: "*${currentBuild.currentResult}:* Job ${env.JOB_NAME} \nBuild: ${env.BUILD_NUMBER} \nDetails: ${env.BUILD_URL}"
            )
        }
    }
}
BUILD THIS PIPELINE → After a successful build, open SonarQube to check for bugs and vulnerabilities.
Step-6: Integrating Jenkins with EKS Pre-Prod (UAT Namespace)
1. Service Account
We first create a service account named jenkins in the uat namespace. This service account will be used by Jenkins to authenticate with the Kubernetes API.
2. Role
We then create a Role, which defines what actions are allowed inside the namespace. For example:
View pods, services, and deployments
Create or delete resources
Watch and update existing configurations
3. RoleBinding
The RoleBinding connects the jenkins service account with the defined role, granting the necessary access.
To deploy into the EKS_CLOUD (Pre-Prod) cluster from Jenkins, we must first set up access and permissions using a ServiceAccount in the Kubernetes cluster.
Create a Directory for Manifests
mkdir manifests
cd manifests
Create a service-account.yml file
# service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: uat
Apply the Manifest
kubectl create namespace uat
kubectl apply -f service-account.yml
#To verify that your Jenkins ServiceAccount was created successfully in the uat namespace, run:
kubectl get serviceaccount -n uat
This creates a ServiceAccount named jenkins inside the uat namespace of your EKS cluster.
Create a role.yml file
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: uat
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - pods
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingress
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
      - secrets
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Create a rolebinding.yml file
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: uat
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: uat
Run:
kubectl create -f role.yml
kubectl create -f rolebinding.yml
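Optionally, before wiring this into Jenkins, you can sanity-check the permissions by asking kubectl to impersonate the service account:
# Both should print "yes" if the Role and RoleBinding took effect
kubectl auth can-i create deployments -n uat --as=system:serviceaccount:uat:jenkins
kubectl auth can-i get pods -n uat --as=system:serviceaccount:uat:jenkins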
Creating a Token for Jenkins ServiceAccount
To allow Jenkins to authenticate and access the Kubernetes cluster, follow these steps to generate and retrieve the token:
Create a secret YAML file called secret.yml:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: jenkins
Note: Make sure the jenkins name matches your service account name.
Apply the Secret in the uat Namespace
kubectl apply -f secret.yml -n uat
⚠️ Since the namespace is not mentioned inside the YAML, you must pass it using -n uat.
Retrieve the Token
#Once created, run:
kubectl describe secret mysecretname -n uat
In the output, you'll see a field called token — copy that value and store it securely.
Where to Use It? You'll use this token in Jenkins (via the Kubernetes plugin) to authenticate and deploy workloads to the uat namespace.
Open Jenkins → Go to your pipeline job → Click Pipeline Syntax (bottom of the config page).
In the dropdown, select: 🛠️ Kubernetes CLI Plugin → Configure Kubernetes CLI
Add Credentials:
Click on “Add” → Select “Jenkins”
Choose “Secret text”
Paste the token you copied from kubectl describe secret mysecretname -n uat
Set an ID like k8s-token (it must match the credentialsId referenced in the deploy pipeline below)
Click Add
Create a new item → UAT_DEPLOY_PIPELINE
(You can find the Kubernetes API endpoint on your cluster dashboard.)
pipeline {
    agent any
    environment {
        NAMESPACE = "uat" // Make sure to define the namespace if not passed as a parameter
    }
    stages {
        stage('Checkout Code') {
            steps {
                git 'https://github.com/PasupuletiBhavya/devsecops-project.git'
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                withKubeCredentials(kubectlCredentials: [[
                    caCertificate: '',
                    clusterName: 'EKS_CLOUD',
                    contextName: 'myapp',
                    credentialsId: 'k8s-token',
                    namespace: "${NAMESPACE}",
                    serverUrl: 'https://5D345FCC81F067526748BA123E5956EF.gr7.us-east-1.eks.amazonaws.com'
                ]]) {
                    sh "kubectl apply -f Manifests -n ${NAMESPACE}"
                }
            }
        }
        stage('Verify Deployment') {
            steps {
                withKubeCredentials(kubectlCredentials: [[
                    caCertificate: '',
                    clusterName: 'EKS_CLOUD',
                    contextName: 'myapp',
                    credentialsId: 'k8s-token',
                    namespace: "${NAMESPACE}",
                    serverUrl: 'https://5D345FCC81F067526748BA123E5956EF.gr7.us-east-1.eks.amazonaws.com'
                ]]) {
                    sh "kubectl get all -n ${NAMESPACE}"
                    sh 'sleep 30'
                }
            }
        }
    }
    post {
        always {
            echo 'Sending Slack Notification...'
            slackSend (
                channel: 'all-camp',
                message: "*${currentBuild.currentResult}:* Job `${env.JOB_NAME}`\nBuild `${env.BUILD_NUMBER}`\nMore info: ${env.BUILD_URL}"
            )
        }
    }
}
BUILD PIPELINE AND ACCESS THE APPLICATION
So, what we did so far ↓

| Jenkins EC2 does | Worker Node EC2 does |
| --- | --- |
| Runs pipeline stages (build, scan, push) | Actually runs your application pods |
| Talks to Kubernetes via kubectl | Receives pods from the Kubernetes scheduler |
| Needs a Kubernetes token to access the cluster | Doesn't need to know anything about Jenkins |
Just follow the same UAT process inside the staging workspace. Only change: update the namespace from uat to staging wherever it is used.
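For reference, the namespace prep mirrors the UAT steps; a rough sketch (assuming you copy the UAT manifests and switch the namespace before applying them):
kubectl create namespace staging
# Copy the UAT manifests and change namespace: uat to namespace: staging, then:
kubectl apply -f service-account.yml
kubectl apply -f role.yml
kubectl apply -f rolebinding.yml
kubectl apply -f secret.yml -n staging
# Copy this token into a new Jenkins credential (e.g. k8s-staging-token):
kubectl describe secret mysecretname -n staging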
Create new jobs ↓
The Build Pipeline (Stage_Build_Pipeline) for staging is the same as UAT — only the Docker image tag is updated to:
bhavyap007/finalround:staging-v1
The Deploy Pipeline (Stage_Deploy_Pipeline) for staging is also identical — just update:
namespace: "staging"
credentialsId: "k8s-staging-token"
All manifests will now apply in the staging namespace of EKS_CLOUD.
Make sure to update the image tag in the dss.yml file in GitHub.
BUILD the pipelines
Everything is updated in our Slack channel.
Step-7: Now that staging is done, we create a new Kubernetes cluster just for production.
Create a new Terraform workspace called prod.
We switch to the prod workspace and apply the Terraform config to launch our dedicated production EKS cluster.
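The commands behind that step look roughly like this (assuming the Terraform config derives the cluster name from the workspace; adjust to your setup):
cd k8s-project/eks-terraform
terraform workspace new prod        # creates and switches to the prod workspace
terraform apply --auto-approve      # provisions the dedicated production EKS cluster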
#We then update the kubeconfig:
aws eks update-kubeconfig --region us-east-1 --name EKS_CLOUD_PROD
#The cluster is successfully connected — confirmed with:
kubectl get nodes
Create a new namespace prod → follow the same steps as in pre-prod (ServiceAccount, Role, RoleBinding, and token).
Go to Jenkins > New Item
Name it: prod-deploy-pipeline
In the “Copy from” field, enter: Stage_Deploy_Pipeline
Now just update the following:
Namespace → "prod"
credentialsId → your production token ID (e.g., k8s-prod-token)
serverUrl → your EKS_CLOUD_PROD cluster endpoint URL
Access application
Step-8: Install Argo CD Using Helm
Why Argo CD?
We use Argo CD to simplify and automate Kubernetes deployments using the GitOps model.
Instead of manually pushing changes with kubectl, Argo CD:
Watches the Git repo for changes
Automatically pulls and syncs updates to the cluster
Gives a dashboard to monitor app status, rollout progress, and rollbacks
✅ More visibility
✅ Better control
✅ Production-grade delivery with Git as the source of truth
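Under the hood, Argo CD tracks an Application resource that points at a Git repo, path, and target namespace. In this post the app is created through the UI, but declaratively it would look roughly like this (the path and names are illustrative):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: camp-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/PasupuletiBhavya/devsecops-project.git
    targetRevision: master
    path: Manifests                 # folder containing the Kubernetes YAMLs
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert manual drift back to the Git state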
Install Helm 3
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version # Confirm installation
Add Argo CD Helm Repository
helm repo add argo-cd https://argoproj.github.io/argo-helm
helm repo update
Create Argo CD Namespace
kubectl create namespace argocd
Install Argo CD Using Helm
helm install argocd argo-cd/argo-cd -n argocd
Check Argo CD Resources
kubectl get all -n argocd
Accessing Argo CD in the Browser
By default, Argo CD is not exposed to the internet. To access its dashboard, we need to expose it externally using a LoadBalancer type service.
Convert ClusterIP to LoadBalancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
🔹 This changes the Argo CD service type from ClusterIP (internal) to LoadBalancer (public), so you can access it from your browser.
Install jq (JSON parser)
yum install jq -y
🔹 We use jq to easily extract the external hostname from the Argo CD service response.
Get the External Load Balancer URL
kubectl get svc argocd-server -n argocd -o json | jq --raw-output .status.loadBalancer.ingress[0].hostname
🔹 This command fetches the public URL (hostname) created by AWS ELB for your Argo CD service.
Getting the Argo CD Admin Password
When Argo CD is installed, it automatically creates a default admin account. The password is stored in a Kubernetes secret named argocd-initial-admin-secret.
# Export the admin password to a variable
export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
🔹 This fetches the base64-encoded password from the Kubernetes secret, decodes it, and stores it in the ARGO_PWD variable.
⚠️ Note: the $( ... ) command substitution is what actually runs the command; wrapping it in single quotes would only store the text.
Or run the command directly to print the password:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
🔹 This will display the actual Argo CD admin password in your terminal.
Argo CD gives you a visual tree view of your Kubernetes resources:
Root node: the application (camo-app)
Followed by:
yelp-camp-service (Service)
yelp-camp-deployment (Deployment)
ReplicaSet
2 Pods — both running & healthy
If I make a change in my Git code, say I change the number of replicas to 4, Argo CD will automatically pull the change and sync it.
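For example, the change in Git is just a one-line edit to the deployment manifest (an illustrative fragment of the dss.yml referenced earlier):
spec:
  replicas: 4    # Argo CD notices the diff in Git and syncs the cluster to 4 pods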
I can also update the image and deploy a different application.
You can see the application has changed from Camp to ZOMATO.
You can also roll back to the previous Camp version.
Step-9: The last part is to delete all resources to avoid billing.
# Do this for both the prod and pre-prod clusters
terraform destroy --auto-approve
aws eks update-kubeconfig --region us-east-1 --name EKS_CLOUD
terraform workspace select default
terraform destroy --auto-approve
Final Kubernetes Deployment Architecture
We use two separate EKS clusters to isolate staging/testing from actual production:
Pre-Prod Cluster: EKS_CLOUD
Hosts both UAT and Staging environments.
Each environment is separated using Kubernetes namespaces.
We build and test Docker images here.
Once validated in Staging, the same image is promoted to Production.
Production Cluster: EKS_CLOUD_PROD
Dedicated only for Production deployment.
Ensures complete isolation from testing activities.
Final deployment is done without rebuilding, directly using the verified image from staging.
This separation ensures:
Stability and security in production.
Easy troubleshooting and rollback in pre-prod without affecting users.
What I Learned
This project gave me hands-on experience in building and deploying a secure, production-grade DevSecOps pipeline. Here's a quick summary:
Application Setup
Built a 3-tier Node.js app (Yelp Camp) with Cloudinary & Mapbox integrations
Used environment variables for secure config
Hosted source code on GitHub
Docker & CI/CD
Containerized the app using a lightweight Alpine image
Scanned Docker images with Trivy
Automated CI/CD using Jenkins pipelines (build, scan, push, deploy)
Infrastructure as Code
Used Terraform to provision EC2 servers & EKS clusters
Stored Terraform state securely in S3
Multi-Stage Pipelines
Dev: Local testing & quick feedback
UAT & Staging: Full build + deploy pipelines with Kubernetes integration
Prod: GitOps-based deployment using Argo CD
Kubernetes & GitOps
Created namespaces, roles, and bindings
Jenkins connected to K8s with ServiceAccounts
Argo CD auto-synced from Git and supported rollback
Monitoring
Slack integrated for build/deploy notifications
Argo CD UI helped track deployment status in real-time
For anyone starting out in DevOps, building a pipeline like this is one of the best ways to gain practical, resume-worthy experience.
If this article helped you in any way, your support would mean a lot to me 💕 — only if it's within your means.
Let’s stay connected on LinkedIn and grow together!
💬 Feel free to comment or connect if you have questions, feedback, or want to collaborate on similar projects.