DevSecOps Mega Project: EKS Cluster, Jenkins CI/CD & ArgoCD GitOps

SUMIT DESHMUKH

Introduction

  1. Overview of the project :- Wanderlust is a blogging app where you can write about your adventurous experiences. This project is heavily based on DevSecOps implementation, so we will talk about the DevOps implementation of the project: CI/CD using Jenkins, and infrastructure as code and provisioning using Terraform.

  2. Purpose of implementing DevSecOps :- The purpose is to show how such a DevSecOps project can be implemented, so that you can learn from it and apply all of these technologies in your own projects.

  3. Key technologies used :-

    1. GitHub :- From GitHub we fetch the code pushed by the developer.

    2. Terraform :- Terraform is used as Infrastructure as Code for provisioning.

    3. AWS :- Amazon Web Services

      1. EC2 :- A Linux server of instance type “ t2.large ”.

      2. VPC :- Virtual Private Cloud.

      3. EKS :- Elastic Kubernetes Service with two worker nodes of instance type “ t2.large ”.

      4. IAM :- Identity and Access Management.

    4. Jenkins :- Jenkins for CI and CD.

    5. Trivy :- For image scanning and code (filesystem) scanning.

    6. OWASP :- Open Web Application Security Project; its Dependency-Check is used for code security and dependency vulnerability checking.

    7. Sonar :- SonarQube is used to analyse the code, check its quality, and check for vulnerabilities.

    8. Helm :- Helm is a package manager; we simply use it to install Grafana and Prometheus.

    9. Grafana :- Grafana is used to monitor the health of the app and the tools, so we can handle a situation before anything collapses.

    10. Prometheus :- Prometheus is used to monitor time-series data, i.e. sequences of data points collected over time, each with a timestamp. This lets you track how a metric changes over time, which makes it particularly useful for monitoring systems and identifying trends and anomalies in your infrastructure.

  4. Expected outcomes :- The expected outcome is the project deployed securely, with no known vulnerabilities.

Project Architecture

  1. High-level architecture diagram :-

  2. Components and their roles :-

    1. Block no.1 :- It has the three-tier application with frontend, backend, and database, plus GitHub, from where we will fetch the code.

    2. Block no.2 :- It has Terraform for all the infrastructure provisioning, and one Linux server of the “ t2.large ” category on which Jenkins, Trivy, OWASP, SonarQube, Docker, and email notifications will be configured. IAM is for security, key pairs are for connecting to the servers, and security groups simply open the ports where all the applications will run.

    3. Block no.3 :- Jenkins, which is responsible for CI/CD, Trivy for scanning the code, and OWASP for project security and code debugging.

    4. Block no.4 :- It has the EKS cluster with two nodes.

    5. Block no.5 :- It is for GitOps, which means that whenever code is updated in Git, Argo CD will deploy it automatically on the EKS cluster.

    6. Block no.6 :- It is for monitoring, which will have Grafana and Prometheus.

  3. Flow of CI/CD and GitOps in the project :- There will be two pipelines: the first is for CI and the second is for CD. CI has its own pipeline, which handles all the integration steps shown in block no.3.

    This is the Jenkins file for CI :-

     @Library('Shared') _
     pipeline {
         agent {label 'Node'}
    
         environment{
             SONAR_HOME = tool "Sonar"
         }
    
         parameters {
             string(name: 'FRONTEND_DOCKER_TAG', defaultValue: '', description: 'Setting docker image for latest push')
             string(name: 'BACKEND_DOCKER_TAG', defaultValue: '', description: 'Setting docker image for latest push')
         }
    
         stages {
             stage("Validate Parameters") {
                 steps {
                     script {
                         if (params.FRONTEND_DOCKER_TAG == '' || params.BACKEND_DOCKER_TAG == '') {
                             error("FRONTEND_DOCKER_TAG and BACKEND_DOCKER_TAG must be provided.")
                         }
                     }
                 }
             }
             stage("Workspace cleanup"){
                 steps{
                     script{
                         cleanWs()
                     }
                 }
             }
    
             stage('Git: Code Checkout') {
                 steps {
                     script{
                         code_checkout("https://github.com/Sumit-deshmukh/Wanderlust-Mega-Project.git","main")
                     }
                 }
             }
    
             stage("Trivy: Filesystem scan"){
                 steps{
                     script{
                         trivy_scan()
                     }
                 }
             }
    
             stage("OWASP: Dependency check"){
                 steps{
                     script{
                         owasp_dependency()
                     }
                 }
             }
    
             stage("SonarQube: Code Analysis"){
                 steps{
                     script{
                         sonarqube_analysis("Sonar","wanderlust","wanderlust")
                     }
                 }
             }
    
             stage("SonarQube: Code Quality Gates"){
                 steps{
                     script{
                         sonarqube_code_quality()
                     }
                 }
             }
    
             stage('Exporting environment variables') {
                 parallel{
                     stage("Backend env setup"){
                         steps {
                             script{
                                 dir("Automations"){
                                     sh "bash updatebackendnew.sh"
                                 }
                             }
                         }
                     }
    
                     stage("Frontend env setup"){
                         steps {
                             script{
                                 dir("Automations"){
                                     sh "bash updatefrontendnew.sh"
                                 }
                             }
                         }
                     }
                 }
             }
    
             stage("Docker: Build Images"){
                 steps{
                     script{
                             dir('backend'){
                                 docker_build("wanderlust-backend-beta","${params.BACKEND_DOCKER_TAG}","sumitdeshmukh10008")
                             }
    
                             dir('frontend'){
                                 docker_build("wanderlust-frontend-beta","${params.FRONTEND_DOCKER_TAG}","sumitdeshmukh10008")
                             }
                     }
                 }
             }
    
             stage("Docker: Push to DockerHub"){
                 steps{
                     script{
                         docker_push("wanderlust-backend-beta","${params.BACKEND_DOCKER_TAG}","sumitdeshmukh10008") 
                         docker_push("wanderlust-frontend-beta","${params.FRONTEND_DOCKER_TAG}","sumitdeshmukh10008")
                     }
                 }
             }
         }
         post{
             success{
                 archiveArtifacts artifacts: '*.xml', followSymlinks: false
                 build job: "Wanderlust-CD", parameters: [
                     string(name: 'FRONTEND_DOCKER_TAG', value: "${params.FRONTEND_DOCKER_TAG}"),
                     string(name: 'BACKEND_DOCKER_TAG', value: "${params.BACKEND_DOCKER_TAG}")
                 ]
             }
         }
     }
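
    The docker_build() and docker_push() steps come from the shared library referenced by @Library('Shared'); they are assumed to wrap the usual Docker CLI calls, roughly like the following (the tag is shown as an example):

     # Build the backend image from the backend/ directory and push it to Docker Hub
     docker build -t sumitdeshmukh10008/wanderlust-backend-beta:v1 ./backend
     docker push sumitdeshmukh10008/wanderlust-backend-beta:v1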
    

    Now, if the Docker push succeeds, it will trigger the CD job. The Jenkins file for CD is this :-

     @Library('Shared') _
     pipeline {
         agent {label 'Node'}

         parameters {
             string(name: 'FRONTEND_DOCKER_TAG', defaultValue: '', description: 'Frontend Docker tag of the image built by the CI job')
             string(name: 'BACKEND_DOCKER_TAG', defaultValue: '', description: 'Backend Docker tag of the image built by the CI job')
         }

         stages {
             stage("Workspace cleanup"){
                 steps{
                     script{
                         cleanWs()
                     }
                 }
             }

             stage('Git: Code Checkout') {
                 steps {
                     script{
                         code_checkout("https://github.com/Sumit-deshmukh/Wanderlust-Mega-Project.git","main")
                     }
                 }
             }

             stage('Verify: Docker Image Tags') {
                 steps {
                     script{
                         echo "FRONTEND_DOCKER_TAG: ${params.FRONTEND_DOCKER_TAG}"
                         echo "BACKEND_DOCKER_TAG: ${params.BACKEND_DOCKER_TAG}"
                     }
                 }
             }

             stage("Update: Kubernetes manifests"){
                 steps{
                     script{
                         dir('kubernetes'){
                             sh """
                                 sed -i -e s/wanderlust-backend-beta.*/wanderlust-backend-beta:${params.BACKEND_DOCKER_TAG}/g backend.yaml
                             """
                         }

                         dir('kubernetes'){
                             sh """
                                 sed -i -e s/wanderlust-frontend-beta.*/wanderlust-frontend-beta:${params.FRONTEND_DOCKER_TAG}/g frontend.yaml
                             """
                         }
                     }
                 }
             }

             stage("Git: Code update and push to GitHub"){
                 steps{
                     script{
                         withCredentials([gitUsernamePassword(credentialsId: 'Github-cred', gitToolName: 'Default')]) {
                             sh '''
                                 echo "Checking repository status: "
                                 git status

                                 echo "Adding changes to git: "
                                 git add .

                                 echo "Commiting changes: "
                                 git commit -m "Updated environment variables"

                                 echo "Pushing changes to github: "
                                 git push https://github.com/Sumit-deshmukh/Wanderlust-Mega-Project.git main
                             '''
                         }
                     }
                 }
             }
         }
         post {
             success {
                 script {
                     emailext attachLog: true,
                     from: 'deshmukhsumit195@gmail.com',
                     subject: "Wanderlust Application has been updated and deployed - '${currentBuild.result}'",
                     body: """
                         Project: ${env.JOB_NAME}
                         Build Number: ${env.BUILD_NUMBER}
                         URL: ${env.BUILD_URL}
                     """,
                     to: 'deshmukhsumit195@gmail.com',
                     mimeType: 'text/html'
                 }
             }
             failure {
                 script {
                     emailext attachLog: true,
                     from: 'deshmukhsumit195@gmail.com',
                     subject: "Wanderlust Application build failed - '${currentBuild.result}'",
                     body: """
                         Project: ${env.JOB_NAME}
                         Build Number: ${env.BUILD_NUMBER}
                     """,
                     to: 'deshmukhsumit195@gmail.com',
                     mimeType: 'text/html'
                 }
             }
         }
     }



## Prerequisites

1. Software and tools required :- No other software tools are required, but you should have an AWS account.

2. IAM roles and permissions :- The master server should have an IAM role with administrator access.

3. Kubernetes knowledge basics :- You should have basic knowledge of Kubernetes and know its basic commands.


## Infrastructure Setup

1. Setting up AWS IAM roles and Security Groups :- To set up the AWS role, first search for IAM in the AWS search bar, go to Roles, create the role, and give it AdministratorAccess. Then go to the EC2 instance, open Actions, then Security, and attach the IAM role to it.

    For security groups, go to the EC2 instance, open the Security tab, and edit the inbound rules.
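
    The same setup can also be done from the AWS CLI. A minimal sketch, assuming hypothetical names and IDs (jenkins-master-role, jenkins-master-profile, the instance ID, and the security group ID); replace them with your own values:

    ```bash
    # Create a role that EC2 can assume and give it administrator access
    aws iam create-role --role-name jenkins-master-role \
      --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
    aws iam attach-role-policy --role-name jenkins-master-role \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

    # Wrap the role in an instance profile and attach it to the EC2 instance
    aws iam create-instance-profile --instance-profile-name jenkins-master-profile
    aws iam add-role-to-instance-profile --instance-profile-name jenkins-master-profile --role-name jenkins-master-role
    aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
      --iam-instance-profile Name=jenkins-master-profile

    # Open an inbound port in the security group (for example 8080 for Jenkins)
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 8080 --cidr 0.0.0.0/0
    ```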

2. Setting up Terraform for Infrastructure as Code :- First configure the Terraform files, then go into the directory that contains the provisioning files and run the Terraform commands to set up the infrastructure: terraform init, then terraform apply. After that it will start setting up the infrastructure.
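
    A minimal sketch of that workflow, assuming the provisioning files live in a terraform/ directory of the repo (the exact path depends on your layout):

    ```bash
    cd terraform        # directory containing the .tf provisioning files (path assumed)
    terraform init      # download providers and initialise the working directory
    terraform plan      # preview the resources that will be created
    terraform apply     # provision the infrastructure; add -auto-approve to skip the prompt
    ```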

3. Creating and configuring the EKS Cluster :- First, configure AWS on your instance via the AWS CLI. For provisioning EKS, first download kubectl, which lets a user access the nodes, view the namespaces, and manage whatever workloads they need; it is necessary to download it. Then download eksctl to create the EKS cluster, and create the cluster using these commands:

    ```bash
    # Install and configure the AWS CLI
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    sudo apt install unzip
    unzip awscliv2.zip
    sudo ./aws/install
    aws configure

    # Install kubectl
    curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin
    kubectl version --short --client

    # Install eksctl
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin
    eksctl version

    # Create the EKS control plane without a node group
    eksctl create cluster --name=wanderlust \
                        --region=us-east-2 \
                        --version=1.30 \
                        --without-nodegroup

    # Associate the IAM OIDC provider with the cluster
    eksctl utils associate-iam-oidc-provider \
      --region us-east-2 \
      --cluster wanderlust \
      --approve

    # Create a node group with two t2.large worker nodes
    eksctl create nodegroup --cluster=wanderlust \
                         --region=us-east-2 \
                         --name=wanderlust \
                         --node-type=t2.large \
                         --nodes=2 \
                         --nodes-min=2 \
                         --nodes-max=2 \
                         --node-volume-size=29 \
                         --ssh-access \
                         --ssh-public-key=eks-nodegroup-key
    ```

4. Setting up Jenkins, Docker, Trivy, and SonarQube :-

     # Jenkins setup 
     sudo apt update -y
     sudo apt install fontconfig openjdk-17-jre -y
    
     sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
       https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
    
     echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
       https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
       /etc/apt/sources.list.d/jenkins.list > /dev/null
    
     sudo apt-get update -y
     sudo apt-get install jenkins -y
    
     # Docker setup 
    
     sudo apt-get install docker.io -y
     sudo usermod -aG docker ubuntu && newgrp docker
    
     # trivy setup
    
     sudo apt-get install wget apt-transport-https gnupg lsb-release -y
     wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
     echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
     sudo apt-get update -y
     sudo apt-get install trivy -y
    
     # SonarQube will run in a Docker container
    
     docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community
    

    Setup of monitoring and packaging tools like Helm, Grafana, and Prometheus

     # installation of Helm
    
     curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
     chmod 700 get_helm.sh
     ./get_helm.sh
    
     # Add Helm Stable Charts for Your Local Client
    
     helm repo add stable https://charts.helm.sh/stable
    
     #Add Prometheus Helm Repository 
    
     helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    
     # Install Prometheus using Helm
    
     helm install stable prometheus-community/kube-prometheus-stack -n prometheus --create-namespace
    
     # Verify prometheus installation 
    
     kubectl get pods -n prometheus
    
     # Expose Prometheus and Grafana to the external world through Node Port
    
     kubectl edit svc stable-kube-prometheus-sta-prometheus -n prometheus
    
     # After running the command a file will open; change the service type from ClusterIP to NodePort so it is accessible from the outside world
    
     # Now, let's change the SVC of Grafana and expose it to the outside world as well
    
     kubectl edit svc stable-grafana -n prometheus 
    
     # Same ClusterIP-to-NodePort change for Grafana
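
    If you prefer a non-interactive way to make the same change, the service type can also be patched directly. A minimal sketch, assuming the release names created above:

     # Patch the Prometheus and Grafana services to NodePort instead of editing them by hand
     kubectl patch svc stable-kube-prometheus-sta-prometheus -n prometheus -p '{"spec": {"type": "NodePort"}}'
     kubectl patch svc stable-grafana -n prometheus -p '{"spec": {"type": "NodePort"}}'

     # List the services to find the assigned NodePorts for the browser
     kubectl get svc -n prometheus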
    

    Setup of Argo CD

     # Create the argocd namespace
     kubectl create namespace argocd
    
     #Apply argocd manifest 
     kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    
     # argocd cli 
    
     sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
    
     # Provide executable permission 
    
     sudo chmod +x /usr/local/bin/argocd
    
     # Check argocd services
    
     kubectl get svc -n argocd 
    
     # change argocd server's service from ClusterIP to NodePort to access it in outer world on the browser
    
     kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
    
     #Confirm service is patched or not 
     kubectl get svc -n argocd
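
     # Fetch the initial admin password for logging in to Argo CD
     # (this assumes the default argocd-initial-admin-secret created by the install manifest)
     kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d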
    

GitOps Workflow with ArgoCD

  1. Introduction to GitOps and ArgoCD :- GitOps is a term used in DevOps for a more reliable and secure way of delivering and deploying applications. Here, code is pushed to GitHub by the developer, and as soon as the developer pushes the code it is automatically built and deployed using Jenkins and Argo CD.

    Argo CD is a tool made only for Kubernetes; it works only with Kubernetes-related projects.

    What it does is integrate with the GitHub repo, and whenever the code (or a manifest) is updated it automatically syncs the change and delivers the application as soon as possible.

  2. Connecting ArgoCD to a Git repository :- To connect your Git repo to Argo CD there is a dedicated Repositories section under Settings in the UI, from where you connect it to your repo.
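
    A minimal CLI sketch of the same step, assuming you are already logged in with argocd login (for a private repo you would also pass credentials such as --username and --password):

     argocd repo add https://github.com/Sumit-deshmukh/Wanderlust-Mega-Project.git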

  3. Connecting the EKS cluster to ArgoCD :- First, log in to Argo CD with the following command:

     argocd login <your-argocd-server-address> --username admin
    

    Then connect your cluster to Argo CD with the following command. Before that, you need to know your cluster context name:

    
     # Command to find the cluster context name
     kubectl config get-contexts
     # Command to add the cluster to Argo CD
     argocd cluster add Wanderlust@wanderlust.us-west-1.eksctl.io --name wanderlust-eks-cluster
    
  4. Deploying applications using ArgoCD :- For this, go to the Applications tab and give the source (the Git repo from where it will fetch the manifests) and the destination (the cluster where it will deploy the code).
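
    A minimal CLI sketch of the same step. The application name, manifest path (kubernetes, the directory that the CD pipeline updates), and namespace are assumptions; adjust them to your repo and cluster:

     argocd app create wanderlust \
       --repo https://github.com/Sumit-deshmukh/Wanderlust-Mega-Project.git \
       --path kubernetes \
       --dest-server https://kubernetes.default.svc \
       --dest-namespace wanderlust \
       --sync-policy automated
     # https://kubernetes.default.svc targets the cluster Argo CD runs in;
     # to deploy to the EKS cluster added above, use its server URL from 'argocd cluster list' instead

     # Check sync and health status
     argocd app get wanderlust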

Security & Compliance

  1. Integrating security tools like Trivy, Snyk, OWASP, and SonarQube :- To integrate Trivy for scanning, plus OWASP and SonarQube, install the corresponding plugins in Jenkins, then first set up the credentials and then configure them in the Tools and System sections of Jenkins.

  2. Scanning container images for vulnerabilities :- All the Docker images are scanned by Trivy.
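
    The pipeline runs these scans through shared-library steps (trivy_scan() and owasp_dependency()); underneath, the Trivy part boils down to commands like the following. The image name and tag are illustrative:

     # Filesystem scan of the checked-out source code
     trivy fs --severity HIGH,CRITICAL .

     # Scan a built image before it is pushed to Docker Hub
     trivy image sumitdeshmukh10008/wanderlust-backend-beta:v1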

Monitoring & Logging and Package Management

  1. Helm :- Helm is a package manager for Kubernetes that helps automate the deployment and management of applications using Helm Charts.

    • Simplifies deployment by packaging all Kubernetes manifests into a single file (Helm Chart).

    • Allows version control for Kubernetes applications.

    • Supports templating, making configurations dynamic and reusable.

    Use Case in DevSecOps:

    • Deploying ArgoCD, Prometheus, and Grafana using Helm charts.

    • Managing complex Kubernetes applications efficiently.

  2. Grafana :- Grafana is a dashboard tool that integrates with Prometheus (and other data sources) to provide real-time monitoring with interactive visualizations.

    • Creates custom dashboards to track cluster health, pod performance, and security metrics.

    • Supports alerting when anomalies are detected.

    • Allows multi-source integration (Prometheus, Loki, Elasticsearch, etc.).

Use Case in DevSecOps:

  • Visualizing Prometheus data (CPU, memory, network stats).

  • Monitoring security metrics for a DevSecOps pipeline.

  • Creating real-time dashboards for system observability.

  3. Prometheus :- Prometheus is a metrics collection, monitoring, and alerting tool used in Kubernetes. It pulls real-time data from applications and infrastructure to track performance and health.

    • Provides real-time monitoring of Kubernetes clusters.

    • Uses a powerful query language (PromQL) to analyze data.

    • Supports alerting and auto-scaling based on metrics.

    • Use Case in DevSecOps:

    • Monitoring Kubernetes nodes, pods, and services.

    • Collecting CPU, memory, and network usage data.

    • Triggering alerts when thresholds are breached (e.g., high CPU usage).
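
    Once Prometheus is exposed through a NodePort (as in the setup section), PromQL queries can be run from its web UI or over its HTTP API. A minimal sketch; the node address, port, and namespace label are illustrative:

     # Check which scrape targets are up
     curl -G 'http://<node-ip>:<prometheus-nodeport>/api/v1/query' --data-urlencode 'query=up'

     # Per-pod CPU usage over the last 5 minutes in an example namespace
     curl -G 'http://<node-ip>:<prometheus-nodeport>/api/v1/query' \
       --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{namespace="wanderlust"}[5m])) by (pod)'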

Conclusion & Future Enhancements

  1. Summary of the project :- The DevSecOps Mega Project is designed to create a secure, scalable, and automated CI/CD pipeline leveraging Kubernetes, ArgoCD (GitOps), Terraform, and AWS. This project integrates security, observability, and automation into the software development lifecycle to ensure a seamless and secure deployment workflow.

    🔹 Key Components:

    1. Infrastructure as Code (IaC) – Using Terraform to provision cloud resources.

    2. Kubernetes Cluster Management – Deploying workloads on Amazon EKS (Elastic Kubernetes Service).

    3. GitOps Deployment – Using ArgoCD to automate and manage Kubernetes deployments from Git.

    4. Security & Compliance – Implementing IAM, Key Pair security, and Security Groups for access control.

    5. Monitoring & Observability – Using Prometheus for real-time metrics and Grafana for visualization.

    6. Application Stack – Deploying a React.js frontend and a Node.js backend with continuous deployment.

    7. Scalability & High Availability – Ensuring redundancy through multi-node Kubernetes clusters.

🔹 Project Objectives:

✔️ Automate infrastructure provisioning with Terraform.
✔️ Deploy and manage applications using ArgoCD and Helm charts.
✔️ Secure workloads with IAM policies, Security Groups, and Kubernetes RBAC.
✔️ Monitor system health using Prometheus and Grafana dashboards.
✔️ Implement DevSecOps principles to ensure secure and compliant deployments.

🔹 Why This Project?

This project demonstrates a modern DevSecOps pipeline by integrating security into CI/CD, reducing manual intervention, and enhancing reliability and observability. It provides a real-world cloud-native approach to automated deployments and secure software delivery.


  2. Potential improvements (e.g., adding Service Mesh, extending security features) :- 1️⃣ Implementing a Service Mesh (e.g., Istio or Linkerd)

    Currently, microservices communicate within the Kubernetes cluster, but implementing a Service Mesh like Istio or Linkerd can provide:
    🔹 Traffic Control & Load Balancing – Ensures optimal service-to-service communication.
    🔹 Enhanced Security – Mutual TLS (mTLS) encrypts all internal traffic.
    🔹 Observability & Tracing – Advanced monitoring and tracing of requests between microservices.


    2️⃣ Strengthening Security Features

    🔹 Integrate OPA (Open Policy Agent) & Kyverno for policy enforcement on Kubernetes.
    🔹 Use Vault (HashiCorp) for secrets management instead of storing secrets in Kubernetes.
    🔹 Enable Pod Security Policies (PSP) or switch to Pod Security Standards (PSS) for better container security.
    🔹 Implement Static & Dynamic Security Scanning using Snyk, Trivy, or Clair to detect vulnerabilities in containers & dependencies.


    3️⃣ Implementing Advanced CI/CD Pipelines

    🔹 Use Jenkins, GitHub Actions, or GitLab CI/CD with ArgoCD for more flexibility.
    🔹 Enable progressive delivery strategies like Blue-Green Deployments and Canary Releases for safer rollouts.


    4️⃣ Enhancing Observability & Alerting

    🔹 Extend Grafana dashboards to include application-level logs with Loki.
    🔹 Use Prometheus AlertManager to trigger alerts based on metrics.
    🔹 Add Jaeger or OpenTelemetry for distributed tracing of microservices.


    5️⃣ Enabling Multi-Cloud & Hybrid Deployments

    🔹 Extend support for multi-cloud deployment (AWS, GCP, Azure).
    🔹 Implement Kubernetes Federation to manage clusters across multiple cloud providers.
    🔹 Deploy on-premise Kubernetes clusters using Rancher or K3s for hybrid setups.

  3. Final Thought :- The DevSecOps Mega Project successfully integrates security, automation, and observability into a scalable cloud-native infrastructure. By leveraging Kubernetes, ArgoCD (GitOps), Terraform, and AWS services, this project demonstrates how organizations can achieve continuous deployment while maintaining high security and compliance standards.

    Key takeaways from this project:
    Automated Deployments: GitOps approach with ArgoCD ensures smooth and version-controlled deployments.
    Security-First Mindset: IAM policies, security groups, and RBAC in Kubernetes enforce strict access controls.
    Observability & Monitoring: Prometheus & Grafana provide real-time insights into application health and performance.
    Scalability & Resilience: Multi-node EKS cluster ensures high availability and failover capabilities.
    Infrastructure as Code (IaC): Terraform enables easy and reproducible infrastructure management.

    However, while this project lays a strong foundation for DevSecOps, there are several areas where it can be further enhanced.
