Amazon Clone - CI/CD Pipeline with Kubernetes & Argo CD

This Amazon Clone project is built to showcase my DevOps skills using tools like Terraform, Docker, Kubernetes, Jenkins, and Argo CD. It covers everything from infrastructure setup to CI/CD automation, reflecting a real-world deployment workflow.

⚙️ Infrastructure Setup with Terraform & Tool Installation

To begin, I used Terraform to provision the infrastructure, including a VM instance that acts as my Jenkins master server. As part of the provisioning process, I used a shell script named install-script.sh, which installs all the essential DevOps tools required for this project.

🔧 Tools Installed via install-script.sh:

  • Jenkins

  • SonarQube

  • Docker

  • Java (for Jenkins & Sonar)

  • Trivy (for image scanning)

  • npm (for frontend builds)

  • Helm

  • Terraform (for local use inside Jenkins)

🔐 Injecting Jenkins Credentials via Script

After provisioning the server and installing Jenkins, I remotely accessed the instance and injected the credentials Jenkins needs, using a scripted approach. These included:

* GitHub Access Token (for source code integration)

* SonarQube Token (for code quality analysis)

* DockerHub Credentials (for image push)

* AWS Access Keys (for ECR or resource access)

I used the Jenkins CLI and Groovy init scripts to set these up programmatically.
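
The pattern I used looks roughly like this: drop a Groovy script into Jenkins' init.groovy.d directory so credentials are created automatically at startup. The credential IDs ("github-token", "sonar-token") and environment variables here are examples, not the exact ones from my setup:

```shell
# Write a Groovy init script that Jenkins runs on startup to create
# credentials. Paths and IDs are illustrative.
mkdir -p jenkins_home/init.groovy.d
cat > jenkins_home/init.groovy.d/add-credentials.groovy <<'EOF'
import hudson.util.Secret
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.domains.Domain
import org.jenkinsci.plugins.plaincredentials.impl.StringCredentialsImpl

def store = SystemCredentialsProvider.getInstance().getStore()
store.addCredentials(Domain.global(), new StringCredentialsImpl(
    CredentialsScope.GLOBAL, 'github-token', 'GitHub access token',
    Secret.fromString(System.getenv('GITHUB_TOKEN'))))
store.addCredentials(Domain.global(), new StringCredentialsImpl(
    CredentialsScope.GLOBAL, 'sonar-token', 'SonarQube token',
    Secret.fromString(System.getenv('SONAR_TOKEN'))))
EOF
```

Because the script runs before any job does, the pipeline can reference these IDs from day one without clicking through the UI.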

🚀 CI/CD Pipeline with Jenkins

Once the infrastructure and tools were set up, I triggered a Jenkins pipeline designed to automate the entire application lifecycle, from infrastructure provisioning to deployment. The pipeline includes conditional stages and parameterized builds, and integrates with AWS, Docker, GitHub, and SonarQube.

The pipeline is written using Declarative Jenkins Pipeline syntax and contains smart controls like:

* Skipping previously completed stages

* Triggering Terraform destroy independently

* Handling failed stages with a restart-from-last-point feature
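
The real Jenkinsfile lives in the repo; a trimmed, hypothetical skeleton showing how a parameter and conditional stages could be wired together looks like this (stage names and bodies are illustrative):

```groovy
// Hypothetical skeleton, not the exact pipeline from the repo.
pipeline {
    agent any
    parameters {
        booleanParam(name: 'RUN_TERRAFORM_DESTROY', defaultValue: false,
                     description: 'Tear down the EKS infrastructure at the end')
    }
    stages {
        stage('Terraform EKS Setup') {
            when { expression { !params.RUN_TERRAFORM_DESTROY } }
            steps { sh 'terraform -chdir=eks init && terraform -chdir=eks apply -auto-approve' }
        }
        stage('Terraform Destroy') {
            when { expression { params.RUN_TERRAFORM_DESTROY } }
            steps { sh 'terraform -chdir=eks destroy -auto-approve' }
        }
    }
}
```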

🔄 Pipeline Stages Breakdown

🔧 1. Pipeline Setup

This stage prepares the workspace and checks if the previous pipeline failed mid-way. If so, it resumes from the last incomplete stage. It helps save time and resources during partial re-runs.
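
The checkpoint idea can be sketched in a few lines of shell (this is a minimal standalone illustration, not the actual Jenkins implementation): each stage records its name after succeeding, and a re-run skips anything already marked done.

```shell
#!/bin/sh
# Minimal checkpoint sketch: completed stages are logged to a file,
# and re-runs skip stages that are already recorded.
CHECKPOINT_FILE=".pipeline_checkpoint"

run_stage() {
  stage="$1"; shift
  if grep -qx "$stage" "$CHECKPOINT_FILE" 2>/dev/null; then
    echo "Skipping '$stage' (already completed)"
    return 0
  fi
  "$@" && echo "$stage" >> "$CHECKPOINT_FILE"
}

run_stage "git-checkout" echo "...cloning repo..."
run_stage "git-checkout" echo "...cloning repo..."   # second call is skipped
```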

📦 2. Git Checkout

The source code for the application is pulled from GitHub. This includes both the application code and Kubernetes manifests needed for deployment.

☁️ 3. Terraform EKS Setup

Using Terraform, the pipeline provisions an Elastic Kubernetes Service (EKS) cluster on AWS. This includes setting up IAM roles, networking (VPC, subnets), and all the necessary resources to run Kubernetes workloads.
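
The actual Terraform code also defines the VPC, subnets, and IAM roles in full; this fragment only shows the shape of the cluster definition, assuming the community EKS module, with illustrative names and sizes:

```hcl
# Illustrative fragment only; names, versions and sizes are assumptions.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "amazon-clone-eks"
  cluster_version = "1.29"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      desired_size   = 2
      max_size       = 3
    }
  }
}
```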

🔍 4. SonarQube Code Analysis

In this stage, the Jenkins pipeline performs static code analysis using SonarQube. The quality of the codebase is evaluated against predefined coding standards and security rules. The results are sent to the SonarQube dashboard, providing visibility into technical debt and vulnerabilities.
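
A typical sonar-scanner invocation for this stage looks like the following; the project key and host URL are placeholders, and in Jenkins the token would come from the credentials store rather than an environment variable:

```shell
sonar-scanner \
  -Dsonar.projectKey=amazon-clone \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://<jenkins-server-ip>:9000 \
  -Dsonar.login="$SONAR_TOKEN"
```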

🐳 5. Docker Build & Push

This stage handles containerization of the application using Docker. Once built, the image is tagged and pushed to an AWS ECR repository, making it ready for deployment. Trivy (a vulnerability scanner) is also used here to scan the image for known CVEs before pushing.
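
The build/scan/push sequence can be sketched as below; the account ID, region, and repo name are placeholders. The key detail is Trivy's --exit-code 1 flag, which fails the stage if HIGH or CRITICAL vulnerabilities are found, so a vulnerable image never reaches ECR:

```shell
IMAGE="<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:${BUILD_NUMBER}"

docker build -t "$IMAGE" .

# Fail the stage on HIGH/CRITICAL findings
trivy image --severity HIGH,CRITICAL --exit-code 1 "$IMAGE"

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin "<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com"
docker push "$IMAGE"
```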

⚓ 6. Kubernetes Deployment with Argo CD

Deployment to Kubernetes is handled via GitOps using ArgoCD. The pipeline updates the Kubernetes manifest repository, and Argo CD automatically syncs and deploys the new version of the app to the EKS cluster.

This decouples CI from CD and provides real-time visibility into the app's state through Argo CD's UI.
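
The glue between the two sides is an Argo CD Application resource pointing at the manifest repo. This is a hypothetical example; the repo URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: amazon-clone
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/amazon-clone-cicd-argo.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: amazon-clone
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With automated sync plus selfHeal, any drift between the cluster and the Git state is corrected without manual intervention.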

🔁 7. Terraform Destroy (At the End)

If the RUN_TERRAFORM_DESTROY parameter is enabled, this stage will tear down the entire EKS infrastructure using Terraform destroy. This is helpful during testing or when cleaning up cloud resources to avoid unwanted costs.
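
Conceptually the guard is just this (shown standalone, defaulting to false so nothing is destroyed by accident):

```shell
#!/bin/sh
# Guarded teardown: destroy only runs when the parameter is explicitly true.
RUN_TERRAFORM_DESTROY="${RUN_TERRAFORM_DESTROY:-false}"

if [ "$RUN_TERRAFORM_DESTROY" = "true" ]; then
  terraform -chdir=eks destroy -auto-approve
else
  echo "RUN_TERRAFORM_DESTROY is false; keeping the EKS cluster." | tee destroy-skipped.log
fi
```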

🚢 GitOps Deployment with Argo CD

Once the Docker image is pushed to the registry, the application is deployed to the EKS cluster using ArgoCD, following a GitOps approach.

In my GitOps model, the Kubernetes manifests are stored in a separate Git repository. Any change made to these manifests (like an image version update) is automatically detected and applied by Argo CD.

🧩 How It Works

* The Jenkins pipeline updates the deployment.yml file with the new Docker image tag.

* This file lives in a separate Kubernetes manifest repository (e.g., amazon-clone-cicd-argo).

* Once the change is pushed, Argo CD detects it automatically and syncs the new state to the EKS cluster.

* Argo CD takes care of rolling out the new version and monitoring the status.
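
The image-bump step itself is a one-line sed plus a commit. Here is a self-contained demo of that rewrite (the manifest, registry name, and tag are made up for illustration; in Jenkins the new tag would be the build number or a git SHA):

```shell
#!/bin/sh
NEW_TAG="v2.0.1"   # in Jenkins: ${BUILD_NUMBER} or a commit SHA

# Stand-in manifest so the demo is self-contained
cat > deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazon-clone
spec:
  template:
    spec:
      containers:
        - name: amazon-clone
          image: example.registry/amazon-clone:v1.0.0
EOF

# Replace whatever tag follows the image name with the new one
sed -i "s|\(image: example.registry/amazon-clone:\).*|\1${NEW_TAG}|" deployment.yml
grep "image:" deployment.yml
# In the pipeline this change is then committed and pushed, e.g.:
#   git commit -am "chore: bump image to ${NEW_TAG}" && git push
```

Once the push lands in the manifest repo, Argo CD notices the new desired state and rolls it out.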

📺 Argo CD Dashboard

Argo CD also provides a clean and interactive dashboard, where I can:

* Monitor deployment status

* View application history

* Roll back to previous versions

* Manually sync or pause deployments if needed

Here's a snapshot of what we would typically see:

[Screenshot: complete Jenkins pipeline overview]

📊 Monitoring with Prometheus & Grafana

To complete the production-grade setup, I integrated Prometheus and Grafana into the EKS cluster for real-time monitoring and visualization of application and cluster metrics.

🔎 Prometheus Setup

Prometheus was installed in the cluster using Helm, and it's responsible for scraping metrics from:

* Kubernetes nodes and pods

* Jenkins and application metrics (via exporters)

* Argo CD and system components (via service monitors)

Configuration was done through Helm values to set up retention, scrape intervals, and resource limits.
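
Assuming the community kube-prometheus-stack chart (which bundles Grafana), the install looks roughly like this; the release name, namespace, and the specific values are my choices, not prescribed:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --set prometheus.prometheusSpec.retention=7d \
  --set prometheus.prometheusSpec.scrapeInterval=30s
```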

📈 Grafana Dashboards

Grafana was also deployed using Helm and connected to Prometheus as its data source.

I created custom dashboards to visualize:

* Pod CPU & Memory usage

* Node health and cluster performance

* Jenkins job stats

* HTTP response times and request rates for the app

🚦 Alerting (Optional)

Prometheus Alertmanager was also configured to trigger alerts on:

* Pod crashes or restarts

* High CPU/Memory usage

* Unhealthy services or nodes

These alerts can be routed to email, Slack, or any external system (I used email in testing).
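
An email route in Alertmanager is a small config fragment like the one below; the SMTP host and addresses are placeholders:

```yaml
global:
  smtp_smarthost: "smtp.example.com:587"
  smtp_from: "alerts@example.com"
route:
  receiver: email-team
  group_by: [alertname, namespace]
receivers:
  - name: email-team
    email_configs:
      - to: "devops@example.com"
```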

[Screenshot: Grafana dashboard]

🛠️ Troubleshooting & What I Learned

While building this project, I faced a few real-world issues that taught me a lot:

🔐 Jenkins Credentials Issue

Automating Jenkins credentials (GitHub, AWS, Docker, etc.) was tricky; they didn't persist properly at first. I solved this by writing a Groovy init script to auto-create them, making Jenkins setup fully hands-free.

⚠️ Unstable Jenkins Stages

Some stages broke during first-time runs or after build interruptions. I fixed this by adding checkpoint logic to resume from the last successful stage and handled errors more gracefully.

🌀 Kubernetes Pods in "Pending"

Some pods stayed stuck in "Pending" due to:

* Low node resources
* Missing storage class

I investigated using kubectl describe and adjusted the configs, which was a great lesson in how Kubernetes scheduling works.
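
For anyone hitting the same issue, these are the commands I would reach for first (pod and namespace names are placeholders):

```shell
kubectl describe pod <pod-name> -n <namespace>          # scheduling events at the bottom
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
kubectl describe nodes | grep -A 5 "Allocated resources" # is the cluster out of CPU/memory?
kubectl get storageclass                                 # is a default StorageClass set?
```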

✅ Final Thoughts

This Amazon Clone project was a full-cycle DevOps implementation where I combined tools like Terraform, Jenkins, Docker, Kubernetes, Argo CD, SonarQube, Prometheus, and Grafana to automate everything from infrastructure provisioning to continuous deployment and monitoring.

Along the way, I faced real-world issues, fine-tuned my pipeline, and gained deeper hands-on experience with CI/CD, GitOps, and Kubernetes troubleshooting. This blog is not just a showcase; it's a reflection of what I've learned and built from the ground up.

📝 You can find the full script in my GitHub repo.

Written by LakshmiRajyam Nalla