DevOps Mega Project: How to Create a Secure and Scalable Kubernetes Infrastructure with DevSecOps, GitOps, and 2-Tier Monitoring

Table of contents
- Introduction
- Architecture Overview
- Prerequisites
- Step-by-Step Implementation
- → Step 1: Setting up IAM user
- → Step 2: Set Up Infrastructure with Terraform
- → Step 3: Docker Installation and Docker image creation
- → Step 4: Install AWS CLI
- → Step 5: Install Kubectl
- → Step 6: Install eksctl
- → Step 7: EKS cluster creation
- → Step 8: Creating IAM OpenID connect provider
- → Step 9: Worker nodes creation
- → Step 10: Install and configure Jenkins (Master machine)
- → Step 11: Setting up Jenkins for Email Notification:
- → Step 12: Integrating Jenkins with Github:
- → Step 13: Install and configure SonarQube (Master machine)
- → Step 14: Integrating Jenkins with SonarQube by adding Webhook
- → Step 15: Install Trivy
- → Step 16: Install and Configure ArgoCD (Master Machine)
- → Step 17: Helm Installation
- → Step 18: Install Prometheus
- → Step 19: Access Grafana through the NodePort
- → Step 20: Grafana Dashboards
- Clean Up
- Conclusion

Introduction
In the modern era of cloud-native applications, DevOps plays a crucial role in automating, securing, and deploying software efficiently. This project demonstrates a Kubernetes-based DevOps pipeline with infrastructure as code (IaC), GitOps, DevSecOps, and monitoring to ensure high availability, security, and scalability.
Architecture Overview
Our project consists of several key components:
1. Application Stack
Technologies used: React.js, Node.js, MongoDB
Containerized using Docker
2. Infrastructure Provisioning (Master Machine)
Terraform is used for infrastructure automation.
AWS security is managed using IAM roles, Key Pairs, and Security Groups.
A Master Node controls the cluster deployment.
3. Kubernetes Cluster (AWS EKS)
The application runs on Amazon Elastic Kubernetes Service (EKS).
Worker nodes (Node 1, Node 2) handle application workloads.
4. DevSecOps (Security & Compliance)
Trivy for container vulnerability scanning.
OWASP for application security assessment.
SonarQube for code quality analysis.
Docker for container management.
Mail notifications for security alerts.
5. GitOps (CI/CD Automation)
Git Repository (GitHub/GitLab/Bitbucket) for version control.
ArgoCD for declarative continuous deployment to Kubernetes.
6. Monitoring Stack
Prometheus for metrics collection.
Grafana for visualization and alerting.
Prerequisites
Before setting up this project, ensure you have the following:
1. Cloud Infrastructure (AWS)
AWS account with permissions for EKS, EC2, IAM, and S3.
AWS CLI installed and configured.
2. Infrastructure as Code (Terraform)
Install Terraform to automate AWS infrastructure provisioning.
Basic knowledge of HCL (HashiCorp Configuration Language).
3. Kubernetes Tools
kubectl (Kubernetes CLI) for managing the cluster.
eksctl for EKS cluster setup.
Helm for package management in Kubernetes.
4. DevSecOps & CI/CD Tools
Jenkins/GitHub Actions for CI/CD pipeline.
ArgoCD for GitOps-based deployments.
Trivy, OWASP, and SonarQube for security and code analysis.
5. Docker & Containers
Docker installed for building and managing container images.
Docker Hub/ECR (Elastic Container Registry) for storing images.
6. Monitoring Setup
Prometheus & Grafana installed and configured to monitor system health.
Step-by-Step Implementation
→ Step 1: Setting up IAM user
We need to create an IAM user in AWS. I named it Mega-project-user, then clicked Next.
Now attach policies directly to the user: give it AdministratorAccess as shown below, then click Next and create the user.
The user is created successfully. Now create access keys for it.
Configure AWS CLI:
$ aws configure
Provide:
AWS Access Key ID: AKIAxxxxxxxxxxxxxxx
AWS Secret Access Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region: e.g. eu-west-1
Output format: press Enter for the default (json)
Verify the configuration using:
aws sts get-caller-identity
If you see an account ID, you're good to go!
Now clone the project repository using the git clone command:
$ git clone https://github.com/NikithaJain-git/Springboot-BankApp.git
→ Step 2: Set Up Infrastructure with Terraform
Create the terraform.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.94.0"
    }
  }
}
provider "aws" {
  region = var.aws_region
}
Create the variable.tf file:
variable "aws_region" {
  default = "eu-west-1" # Change to your desired region
}
variable "instance_type" {
  default = "t2.large"
}
variable "ami_id" {
  default = "ami-0df368112825f8d8f" # Use a valid AMI for your region
}
Create the ec2.tf file:
resource "aws_key_pair" "deployer" {
  key_name   = "bankapp-automate-key"
  public_key = file("bankapp-automate-key.pub")
}

resource "aws_default_vpc" "default" {
}

resource "aws_security_group" "allow_user_to_connect" {
  name        = "allow TLS"
  description = "Allow user to connect"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    description = "port 22 allow ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "port 80 allow http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "port 443 allow https"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "allow all outgoing traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "bankapp-mysecurity"
  }
}

resource "aws_instance" "testinstance" {
  ami             = var.ami_id
  instance_type   = var.instance_type
  key_name        = aws_key_pair.deployer.key_name
  security_groups = [aws_security_group.allow_user_to_connect.name]

  root_block_device {
    volume_size = 30
    volume_type = "gp3"
  }

  tags = {
    Name = "Bankapp-server"
  }
}
Create the outputs.tf file:
output "public_ip" {
  value = aws_instance.testinstance.public_ip
}
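The aws_key_pair resource above reads bankapp-automate-key.pub from the working directory, so the key pair must exist locally before you apply. A minimal sketch, assuming you want a fresh RSA key with exactly that name and no passphrase:
# hypothetical local key generation for the key pair referenced in ec2.tf
ssh-keygen -t rsa -b 4096 -f bankapp-automate-key -N ""
This creates bankapp-automate-key (private key, used later for SSH) and bankapp-automate-key.pub (read by Terraform).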
After creating the resource files, run the Terraform commands to create the infrastructure:
terraform init
This will: ✅ Download the necessary Terraform providers. ✅ Set up the working directory.
terraform plan
This will show what Terraform is going to create.
terraform apply
This will: ✅ Create the EC2 instance in AWS.
Your EC2 instance is successfully deployed using Terraform!
Connect to your EC2 instance with an SSH client:
ssh -i bankapp-automate-key ubuntu@<public-ip-address>
→ Step 3: Docker Installation and Docker image creation
Use the below commands to install Docker on your EC2 instance:
sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER
newgrp docker
docker --version
Once the installation is complete, write the Dockerfile for the application:
#----------Stage-1-----------
FROM maven:3.8.3-openjdk-17 AS builder
WORKDIR /src
COPY . /src
RUN mvn clean install -DskipTests=true

#----------Stage-2-----------
FROM openjdk:17-alpine
COPY --from=builder /src/target/*.jar /src/target/bankapp.jar
EXPOSE 8080
CMD ["java", "-jar", "/src/target/bankapp.jar"]
Build the Dockerfile into a Docker image, then tag and push it to Docker Hub:
docker build -t nikithajain/bankapp:latest .
docker images
docker image tag nikithajain/bankapp:latest nikithajain/bankapp-eks:v1
docker push nikithajain/bankapp-eks:v1
→ Step 4: Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws configure
→ Step 5: Install Kubectl
Below are the commands to install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
→ Step 6: Install eksctl
Create an install script with vim install_eksctl.sh and add the following commands:
# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
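The script only downloads and installs the binary; it still has to be executed. A short sketch of running and verifying it (install_eksctl.sh is the file created above):
bash install_eksctl.sh
eksctl version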
→ Step 7: EKS cluster creation
Create EKS cluster from the below command,
eksctl create cluster --name=bankapp-cluster --region=eu-west-1 --version=1.31 --without-nodegroup
After building the Docker image from the Dockerfile and writing the Kubernetes manifests, we deploy the application on an EKS cluster. EKS is a managed service: AWS manages only the master node (control plane), while we manage the worker nodes.
- Under the hood, eksctl provisions the cluster by creating CloudFormation stacks.
Note:
If we want a serverless infrastructure, we can consider AWS Lambda; but to build the infrastructure on EKS, we create it through eksctl.
Once the cluster is created, point kubectl at it as shown below.
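eksctl normally writes the new cluster into your kubeconfig automatically; if kubectl cannot see the cluster, you can regenerate the kubeconfig entry yourself. A small sketch, assuming the cluster name and region used above:
aws eks update-kubeconfig --region eu-west-1 --name bankapp-cluster
kubectl config current-context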
→ Step 8: Creating IAM OpenID connect provider
This command is used in Amazon Elastic Kubernetes Service (EKS) to associate an IAM OpenID Connect (OIDC) provider with the specified Kubernetes cluster. This step is necessary for enabling IAM roles for Kubernetes Service Accounts (IRSA), allowing workloads running on the cluster to securely interact with AWS services.
The command:
eksctl utils associate-iam-oidc-provider --region eu-west-1 --cluster=bankapp-cluster --approve
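To confirm the association worked, you can look up the cluster's OIDC issuer and check that a matching IAM OIDC provider exists. A quick verification sketch, assuming the same cluster name and region:
aws eks describe-cluster --name bankapp-cluster --region eu-west-1 --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers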
→ Step 9: Worker nodes creation
eksctl create nodegroup --cluster=bankapp-cluster --region=eu-west-1 --name=bankapp-ng --node-type=t2.medium --nodes=2 --nodes-min=2 --nodes-max=2 --node-volume-size=15 --ssh-access --ssh-public-key=bankapp-automate-key
In this command we create the worker nodes: we pass the cluster name, its region, the worker node type (t2.medium), the node count (2, with min and max also set to 2), the volume size for each instance, and SSH access with the public key, so that two worker nodes are created.
Here is the EKS cluster we created.
Here are the worker nodes we created.
So far we have two worker nodes and a master node, connected using the IAM OIDC provider; you can verify them with the command below.
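A quick check from the Master machine that both worker nodes have joined the cluster (assuming the kubeconfig from the earlier step points at bankapp-cluster):
kubectl get nodes -o wide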
→ Step 10: Install and configure Jenkins (Master machine)
#Java installation
sudo apt update -y
sudo apt install fontconfig openjdk-17-jre -y
#Jenkins installation
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y
jenkins --version
systemctl status jenkins
sudo vim /usr/lib/systemd/system/jenkins.service #you can change your port from 8080 to 8081 here
sudo systemctl daemon-reload
sudo systemctl restart jenkins
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
newgrp docker
We have installed Jenkins on the Master instance and access it on port 8081, since 8080 is the port through which the BankApp application is accessed.
Now create a job: New Item → Bankapp-CI → Pipeline → OK.
Add your Docker Hub credentials under Manage Jenkins → Credentials → Add credentials, as below:
pipeline {
    agent any
    parameters {
        string(name: 'IMAGE_VERSION', defaultValue: 'latest', description: "Image tag")
    }
    stages {
        stage("code clone") {
            steps {
                git url: "https://github.com/NikithaJain-git/Springboot-BankApp.git", branch: "DevOps"
            }
        }
        stage("Build") {
            steps {
                sh "docker build -t bankapp-eks:${params.IMAGE_VERSION} ."
            }
        }
        stage("Docker push") {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: "dockerHub",
                    usernameVariable: "dockerHubuser",
                    passwordVariable: "dockerHubpass"
                )]) {
                    sh "docker image tag bankapp-eks:${params.IMAGE_VERSION} ${dockerHubuser}/bankapp-eks:${params.IMAGE_VERSION}"
                    sh "docker login -u ${dockerHubuser} -p ${dockerHubpass}"
                    sh "docker push ${dockerHubuser}/bankapp-eks:${params.IMAGE_VERSION}"
                }
            }
        }
    }
}
→ Step 11: Setting up Jenkins for Email Notification:
Open port 465 (SMTPS) in the EC2 instance's security group.
Now go to your Gmail account, open Manage your Google Account from your profile → Security (verify that 2-Step Verification is on).
Search for "app" → open App passwords → verify, add the app name as Jenkins, and create it. You will get a password, which we will configure in Jenkins.
Go to Jenkins → Manage Jenkins → Credentials → Add credentials → New credentials.
- Go to Manage Jenkins → System and add the configuration below for email notification.
We can also run the Test configuration for this email notification.
→ Step 12: Integrating Jenkins with Github:
Go to Manage Jenkins → Credentials → Add credentials → New credentials, add the PAT (Personal Access Token) generated from GitHub as the password, configure it as below, and create it.
→ Step 13: Install and configure SonarQube (Master machine)
docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community
- Add port 9000 to the Master instance's security group as below.
- Now access SonarQube at <public-ip>:9000.
Go to the Jenkins server: Manage Jenkins → Plugins → Available plugins → search for OWASP and install OWASP Dependency-Check.
Now go to Manage Jenkins → Tools and add an OWASP Dependency-Check installation (named OWASP, as referenced later in the pipeline).
We are connecting Jenkins with SonarQube, so we create a token in SonarQube.
Go to Administration → Security → Users → generate a token.
Now that the token is created in SonarQube, let's integrate it with Jenkins as below.
Go to Manage Jenkins → Credentials → Add credentials → New credentials and fill in the form as below.
Now go to Manage Jenkins → Plugins, install the SonarQube Scanner plugin, and restart Jenkins.
Go to Manage Jenkins → System → search for Sonar → add the SonarQube installation as below and save it.
Go to Manage Jenkins → Tools and search for SonarQube Scanner installations.
→ Step 14: Integrating Jenkins with SonarQube by adding Webhook
Log in to the SonarQube server, go to Administration → Configuration → Webhooks, and create a new webhook.
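The webhook should point back at Jenkins so the Quality Gate stage can receive the analysis result. A sketch of the URL format, assuming Jenkins runs on the Master instance on port 8081 (the /sonarqube-webhook/ endpoint is provided by the SonarQube Scanner plugin for Jenkins):
http://<jenkins-public-ip>:8081/sonarqube-webhook/
With the webhook in place, the extended Bankapp-CI pipeline with the DevSecOps stages looks like this: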
pipeline {
    agent any
    environment {
        SONAR_HOME = tool "Sonar"
    }
    parameters {
        string(name: 'IMAGE_VERSION', defaultValue: 'latest', description: "Image tag")
    }
    stages {
        stage("Cleaning Workspace") {
            steps {
                cleanWs()
            }
        }
        stage("Code Clone") {
            steps {
                git url: "https://github.com/NikithaJain-git/Springboot-BankApp.git", branch: "DevOps"
            }
        }
        stage("Trivy File System Scan") {
            steps {
                sh "trivy fs ."
            }
        }
        stage("OWASP Dependency-Check") {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --format XML', odcInstallation: 'OWASP'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage("SonarQube Code Quality Analysis") {
            steps {
                withSonarQubeEnv("Sonar") {
                    sh '''
                        ${SONAR_HOME}/bin/sonar-scanner \
                            -Dsonar.projectKey=bankapp \
                            -Dsonar.projectName=bankapp \
                            -Dsonar.exclusions=**/*.java
                    '''
                }
            }
        }
        stage("SonarQube Quality Gate") {
            steps {
                timeout(time: 2, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage("Build Docker Image") {
            steps {
                sh "docker build -t bankapp-eks:${params.IMAGE_VERSION} ."
            }
        }
        stage("Docker Push") {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: "dockerHub",
                    usernameVariable: "dockerHubuser",
                    passwordVariable: "dockerHubpass"
                )]) {
                    sh "docker image tag bankapp-eks:${params.IMAGE_VERSION} ${dockerHubuser}/bankapp-eks:${params.IMAGE_VERSION}"
                    sh "docker login -u ${dockerHubuser} -p ${dockerHubpass}"
                    sh "docker push ${dockerHubuser}/bankapp-eks:${params.IMAGE_VERSION}"
                }
            }
        }
    }
    post {
        success {
            script {
                emailext(
                    from: 'nikithajain56789@gmail.com',
                    to: 'nikithajain56789@gmail.com',
                    subject: 'Build Success for Bankapp CICD',
                    body: 'Build Success for Bankapp CICD'
                )
            }
        }
    }
}
As you can see, the Jenkins pipeline with the DevSecOps stages has run successfully.
We received an email notification when the build succeeded.
SonarQube:
→ Step 15: Install Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update -y
sudo apt-get install trivy -y
trivy version
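Besides the filesystem scan used in the pipeline (trivy fs .), Trivy can also scan the built image for vulnerabilities. A quick sketch against the image pushed earlier:
trivy image nikithajain/bankapp-eks:v1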
→ Step 16: Install and Configure ArgoCD (Master Machine)
- Create argocd namespace
kubectl create namespace argocd
- Apply argocd manifest
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Make sure all pods are running in argocd namespace
kubectl get pods -n argocd
- Install argocd CLI
sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
- Provide executable permission
sudo chmod +x /usr/local/bin/argocd
argocd version
- Check argocd services
kubectl get svc -n argocd
- Change argocd server's service from ClusterIP to NodePort to expose the port
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
- Confirm service is patched or not
kubectl get svc -n argocd
Check the NodePort on which the ArgoCD server is running and open it in the security group of a worker node.
Access it in the browser (click Advanced and proceed) at:
<public-ip-worker>:<port>
- Fetch the initial password of argocd server
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Username: admin
Now, go to User Info and update your argocd password
If you want to log in to ArgoCD through the terminal, you can do so with the command below:
argocd login 34.245.122.45:30214 --username admin
Then run argocd cluster list to view the registered clusters in the terminal, just like below:
argocd cluster list
- Get your cluster name
kubectl config get-contexts
- Add your cluster to argocd
argocd cluster add Mega-project-user@bankapp-cluster.eu-west-1.eksctl.io --name bankapp-cluster
- Once your cluster is added to argocd, go to argocd console Settings --> Clusters and verify it.
Go to Settings --> Repositories and click on Connect repo
Note: The connection should be successful.
- Go to Applications and click on New App (a declarative equivalent of this app definition is sketched below).
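If you prefer the declarative route instead of the New App form, an Application manifest applied to the argocd namespace does the same thing. This is only an illustrative sketch: the manifest path kubernetes/ is an assumption about the repository layout, so adjust it to match the repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bankapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/NikithaJain-git/Springboot-BankApp.git
    targetRevision: DevOps          # branch used throughout this project
    path: kubernetes                # assumed folder containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: bankapp-namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true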
Check that all the pods, services, and other Kubernetes resources are working fine.
As you can see, the application is now successfully deployed, congratulations and celebrations!
Patch the application's service to type NodePort to expose it:
kubectl patch svc bankapp-service -n bankapp-namespace -p '{"spec": {"type": "NodePort"}}'
Go to the security group and open the NodePort range to access the application, now that it has been deployed with ArgoCD.
Access the application using http://<worker-node-public-ip>:<NodePort>.
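For context, here is a minimal, illustrative sketch of what the Deployment and Service behind bankapp-service could look like. This is not the repository's actual manifest; the labels, replica count, and the omission of database/environment configuration are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bankapp
  namespace: bankapp-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bankapp
  template:
    metadata:
      labels:
        app: bankapp
    spec:
      containers:
        - name: bankapp
          image: nikithajain/bankapp-eks:v1   # image pushed by the CI pipeline
          ports:
            - containerPort: 8080             # port exposed in the Dockerfile
---
apiVersion: v1
kind: Service
metadata:
  name: bankapp-service
  namespace: bankapp-namespace
spec:
  type: NodePort                              # patched to NodePort above
  selector:
    app: bankapp
  ports:
    - port: 80
      targetPort: 8080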
Monitoring with Prometheus and Grafana using Helm:
→ Step 17: Helm Installation
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
helm version
#To install ingress-controller with helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
→ Step 18: Install Prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
kubectl create namespace prometheus
kubectl get ns
helm install stable prometheus-community/kube-prometheus-stack -n prometheus
kubectl get pods -n prometheus
kubectl get svc -n prometheus
kubectl patch svc stable-grafana -n prometheus -p '{"spec": {"type":"NodePort"}}'
Access Grafana at <public-ip>:<NodePort> (for example, <IP address>:31667). Under Connections → Data sources, Prometheus is already added as the default. You can go to Dashboards to visualize the metrics.
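Grafana asks for credentials on first login. With the kube-prometheus-stack chart, the admin password is stored in a Kubernetes secret named after the Helm release (stable here); a sketch of retrieving it, assuming the chart defaults were not overridden:
kubectl get secret stable-grafana -n prometheus -o jsonpath="{.data.admin-password}" | base64 --decode ; echo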
→ Step 19: Access Grafana through the NodePort
→ Step 20: Grafana Dashboards
We also imported templates from the Grafana dashboards library to view Kubernetes and Docker monitoring.
Clean Up
- Delete eks cluster
eksctl delete cluster --name bankapp-cluster --region=eu-west-1
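The EC2 Master machine and its security group were created with Terraform, so they are cleaned up from the Terraform working directory rather than with eksctl. A minimal sketch, run in the directory containing the .tf files from Step 2:
- Destroy the Terraform-managed resources
terraform destroy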
Conclusion
This DevOps project provides a complete end-to-end automation framework with Kubernetes, GitOps, security, and monitoring. It ensures scalability, security, and efficiency for modern cloud applications.
Would you like to expand on any section or include a tutorial video? Let me know in the comments! 🚀