How I Built a Modern GitOps Pipeline for an Enterprise IT Department


Disclaimer: This project was created for educational and demonstration purposes only with proper authorization obtained before implementation. All infrastructure and resources shown in this post have been decommissioned following the completion of the demo.
Introduction
In this post, I'll share my journey building and deploying a modern, production-ready web application for a large enterprise IT department. My goal was to create a secure, scalable, and fully automated deployment pipeline using Terraform, AWS EKS, Docker, Kubernetes, CircleCI, and Argo CD. I'll walk you through every step, show you real screenshots from my process, and explain the "why" behind each decision.
Project Overview
The mission was to deliver a robust, automated workflow for the organization's official web presence: a Vue.js single-page application (SPA) that could be updated and deployed with maximum efficiency and security. I wanted to leverage the best of modern DevOps and GitOps practices, ensuring that every change, from infrastructure to application code, was versioned, auditable, and repeatable.
Architecture & Repository Structure
I split the project into three main repositories, each with a clear responsibility:
Frontend Application (dit-gaf-web)
This is the Vue.js 3 SPA, built with Vite and TypeScript, and fully containerized with Docker. It also contains the Argo CD application manifest that enables GitOps.

Kubernetes Manifests (dit-gaf-manifest)
This repo holds all the Kubernetes YAML manifests (deployments, services, etc.) for the app. Argo CD watches this repo and keeps the cluster in sync.

Infrastructure as Code (dit-gaf-infra)
Here, I used Terraform to provision everything on AWS: VPC, EKS cluster, node groups, IAM roles, and networking.
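To make the manifest repo concrete, here is a minimal sketch of what a Deployment and Service for the app might look like. The replica count, labels, and ports are illustrative assumptions, not the actual manifests from dit-gaf-manifest:

```yaml
# deployment.yaml -- illustrative sketch, not the actual manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dit-gaf-web
  namespace: dit-gaf-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dit-gaf-web
  template:
    metadata:
      labels:
        app: dit-gaf-web
    spec:
      containers:
        - name: dit-gaf-web
          image: <your-dockerhub-username>/dit-gaf-web:latest
          ports:
            - containerPort: 80
---
# service.yaml -- exposes the pods behind an AWS load balancer
apiVersion: v1
kind: Service
metadata:
  name: dit-gaf-web
  namespace: dit-gaf-web
spec:
  type: LoadBalancer
  selector:
    app: dit-gaf-web
  ports:
    - port: 80
      targetPort: 80
```

Because Argo CD watches this repo, changing anything here (for example, bumping the image tag) is all it takes to roll out a new version.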
Here’s a look at my EKS cluster after provisioning:
Step 1: Provisioning Infrastructure with Terraform
I started by setting up the AWS infrastructure using Terraform. This included a secure VPC, an EKS cluster, node groups (using Bottlerocket AMI for security), NAT gateways, and all the necessary IAM roles.
My process:
git clone <infra-repo-url>
cd dit-gaf-infra
cp terraform.tfvars.example terraform.tfvars
# I edited terraform.tfvars to set project_name, region, and other settings
terraform init
terraform plan
terraform apply
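For reference, the variables I set in terraform.tfvars looked roughly like this. The variable names and values below are illustrative; the real file follows whatever terraform.tfvars.example defines:

```hcl
# terraform.tfvars -- illustrative values only
project_name = "dit-gaf"
region       = "us-west-2"
cluster_name = "dit-gaf-eks-cluster"
vpc_cidr     = "10.0.0.0/16"

node_instance_types = ["t3.medium"]
node_desired_size   = 2
```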
After Terraform finished, I configured kubectl to access the new cluster:
terraform output kubectl_config_command
# Example output:
aws eks update-kubeconfig --region us-west-2 --name dit-gaf-eks-cluster
To verify everything was up, I ran:
kubectl get nodes
kubectl get pods -n kube-system
Step 2: Building and Containerizing the Application
Next, I built the Vue.js SPA and containerized it with Docker. This ensures consistency across all environments and makes deployments much easier.
Here’s what I did:
git clone <frontend-repo-url>
cd dit-gaf-web
npm install
npm run build
docker build -t <your-dockerhub-username>/dit-gaf-web:latest .
docker run -d -p 8080:80 <your-dockerhub-username>/dit-gaf-web:latest
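The Dockerfile follows the usual two-stage pattern for a Vite SPA: build the static assets with Node, then serve them with nginx. This is a representative sketch under those assumptions, not necessarily the exact file in the repo:

```dockerfile
# Stage 1: build the Vue 3 / Vite app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the static bundle with nginx on port 80
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

Serving on port 80 inside the container is why the docker run command above maps 8080:80 on the host.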
Here’s the Docker build process:
Step 3: Setting Up CI/CD with CircleCI
To automate the build and deployment process, I set up CircleCI. Every time I pushed code to the main branch, CircleCI would:
Build the Docker image
Push it to Docker Hub
Update the image tag in the Kubernetes manifest repo
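Those three steps translate into a CircleCI config along these lines. The job name, environment variable names, and the sed-based tag bump are illustrative assumptions, not the actual config:

```yaml
# .circleci/config.yml -- illustrative sketch
version: 2.1
jobs:
  build-and-deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push Docker image
          command: |
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
            docker build -t "$DOCKERHUB_USER/dit-gaf-web:$CIRCLE_SHA1" .
            docker push "$DOCKERHUB_USER/dit-gaf-web:$CIRCLE_SHA1"
      - run:
          name: Bump image tag in the manifest repo
          command: |
            git clone "$MANIFEST_REPO_URL" manifest && cd manifest
            sed -i "s|dit-gaf-web:.*|dit-gaf-web:$CIRCLE_SHA1|" deployment.yaml
            git commit -am "Update image tag to $CIRCLE_SHA1" && git push
workflows:
  deploy:
    jobs:
      - build-and-deploy:
          filters:
            branches:
              only: main
```

Note that CI never touches the cluster directly: it only pushes an image and commits to the manifest repo, and Argo CD handles the rest. That separation is what makes the pipeline GitOps-style.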
Here’s a screenshot of my CircleCI pipeline in action:
Step 4: Installing and Configuring Argo CD
With the infrastructure and CI in place, I moved on to setting up Argo CD for GitOps-based continuous delivery.
I installed Argo CD on my EKS cluster:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl get pods -n argocd
To access the Argo CD UI locally:
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Then I logged in at https://localhost:8080
Argo CD UI:
After port-forwarding the Argo CD server and extracting the initial admin password, I logged in to the Argo CD UI at https://localhost:8080 with the username admin and the decoded password. Upon successful login, I was presented with the console interface.
Step 5: Connecting Argo CD to My Application
Inside my frontend repo (dit-gaf-web), I had an argocd/application.yaml file. This file tells Argo CD to monitor my manifest repo and automatically sync any changes to the EKS cluster.
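A minimal version of that Application manifest could look like the following. The repo URL placeholder, path, and project name are assumptions; prune/selfHeal match the self-healing behavior described later:

```yaml
# argocd/application.yaml -- illustrative sketch
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dit-gaf-web
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <manifest-repo-url>   # the dit-gaf-manifest repo
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: dit-gaf-web
  syncPolicy:
    automated:
      prune: true      # delete resources removed from the repo
      selfHeal: true   # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true
```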
Here’s the key part of my workflow:
I deployed the application to Argo CD:
kubectl apply -f argocd/application.yaml
As the screenshot shows, the sync status confirmed a successful deployment.
Step 6: Verifying the Deployment
With everything connected, Argo CD took over and deployed my application to the cluster. I used kubectl to check the status of all resources in the dit-gaf-web namespace:
kubectl get all -n dit-gaf-web
Here’s the output showing my pods, service, deployment, and replicasets:
Additional screenshots are provided below:
Step 7: Accessing the Live Application
Once the service was up, I grabbed the external hostname from the LoadBalancer and accessed the live site.
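The hostname can be pulled straight from the Service object with a jsonpath query. The Service name here is an assumption; use whatever name the manifest repo actually defines:

```
kubectl get svc dit-gaf-web -n dit-gaf-web \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

On EKS this returns the ELB DNS name, which can take a minute or two to resolve after the Service is first created.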
Here’s the app running on AWS:
And here’s the service hostname as seen in Argo CD:
Step 8: Monitoring and Troubleshooting
Throughout the process, I regularly checked the health of my cluster and application:
kubectl get nodes
kubectl get pods --all-namespaces
kubectl get applications -n argocd
kubectl get pods -n dit-gaf-web
kubectl get svc -n dit-gaf-web
If anything went wrong, I’d check logs and events:
kubectl logs <pod-name> -n dit-gaf-web
kubectl describe pod <pod-name> -n dit-gaf-web
kubectl get events -n dit-gaf-web --sort-by='.lastTimestamp'
Key Takeaways
Infrastructure as Code with Terraform made my AWS setup repeatable and secure.
Containerization with Docker ensured consistency from development to production.
CI/CD with CircleCI automated my build and deployment pipeline.
GitOps with Argo CD gave me full control, traceability, and self-healing deployments.
Kubernetes provided a scalable, resilient platform for my application.
Written by

Enoch
I have a passion for automating and optimizing cloud infrastructure. I have experience working with various cloud platforms, including AWS, Azure, and Google Cloud. My goal is to help companies achieve scalable, reliable, and secure cloud environments that drive business success.