Exploring GitOps with Argo CD: Insights, Challenges, and Experiences


What led me to discover and try Argo CD
I was working as a full-stack development intern at a small startup, and they wanted to implement and test Argo CD for their new project. At that time, I was focused on backend and cloud work, so they approached me with this task and asked if I could give them a demo in three days. Initially, I was surprised by the tight deadline, but I decided to accept the challenge.
This was my first experience with both Kubernetes and Argo CD. Even though I knew the basics of Kubernetes, working with it hands-on without any guidance was difficult. That's when I turned to LLMs for assistance. In two days, I learned what I needed about Kubernetes and Argo CD and was ready for the demo on day three — or at least I thought I was.
Before diving into the full story, let's first understand what Argo CD is, what GitOps is, and how they are used.
Introduction: why GitOps and Argo CD matter
Picture this: It's 3 AM, you're on-call, and someone deployed a broken version to production. You're frantically trying to figure out what changed, when it changed, and how to roll back. Sound familiar?
This is exactly the kind of nightmare GitOps was designed to solve. Instead of mysterious deployment scripts and "it works on my machine" syndrome, GitOps treats your Git repository as the single source of truth for your entire infrastructure and application state.
What is Argo CD and why should I care?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. Think of it as your deployment watchdog that sits in your cluster, continuously monitoring your Git repositories and automatically syncing any changes to your running applications.
The beauty of Argo CD lies in its simplicity: you commit your Kubernetes manifests to Git, and Argo CD handles the rest. No more manual kubectl apply commands, no more wondering if your staging environment matches production, and no more deployment anxiety.
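To make this concrete, here's a minimal sketch of what an Argo CD Application manifest looks like. The repository URL, path, and target namespace below are placeholders, not anything specific to this story:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-sample-app              # hypothetical app name
  namespace: argocd                # Argo CD's default install namespace
spec:
  project: default
  source:
    repoURL: https://github.com/your-user/your-repo.git  # placeholder repo
    targetRevision: main
    path: k8s                      # folder in the repo holding your manifests
  destination:
    server: https://kubernetes.default.svc  # the cluster Argo CD runs in
    namespace: demo                # placeholder target namespace
  syncPolicy:
    automated:                     # sync automatically on every Git push
      prune: true                  # delete resources removed from Git
      selfHeal: true               # undo manual drift in the cluster
```

Apply a manifest like this (with kubectl, the argocd CLI, or the UI), and Argo CD watches that Git path and keeps the cluster in sync with whatever is committed there.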
Here's why developers are falling in love with GitOps:
Declarative: Your desired state is clearly defined in Git
Versioned: Every change is tracked and auditable
Automated: Deployments happen automatically when you push code
Recoverable: Easy rollbacks to any previous state
Secure: All changes go through your normal Git workflow (PRs, reviews, etc.)
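The "Recoverable" point above is worth making concrete: because your desired state lives in Git, a rollback is just an ordinary Git operation. A minimal sketch, where the commit hash and manifest directory are illustrative:

```bash
# Find the commit that introduced the bad change
git log --oneline -- k8s/

# Revert it and push; Argo CD notices the new commit and
# syncs the cluster back to the previous state
git revert 3f2a91c   # illustrative commit hash
git push origin main
```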
Prerequisites: What You Need to Know
To get started with Argo CD, you don't need to be a Kubernetes wizard, but some basic familiarity will help.
Essential Knowledge:
Docker basics: Understanding containers, images, and how they work
Kubernetes fundamentals: Pods, Services, Deployments (even at a high level)
Git workflow: Cloning repos, committing changes, pushing code
YAML syntax: Since Kubernetes manifests are written in YAML
Command line comfort: You'll be using kubectl and other CLI tools
Required Tools:
Git (obviously)
Docker Desktop or equivalent
kubectl (Kubernetes CLI)
A GitHub account with a sample repository
About 30 minutes and a cup of coffee ☕
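With those installed, a quick sanity check from the terminal might look like this (exact version numbers will vary):

```bash
git --version             # any recent Git is fine
docker --version          # confirms the Docker CLI is on your PATH
kubectl version --client  # client-only check; no cluster needed yet
```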
My biggest mistake: Using the AWS Free Tier for my Argo CD setup.
I initially thought I'd use the AWS Free Tier to set up Argo CD, since I just needed a simple demo with a small application. If you're thinking the same, let me save you some headaches: it's technically possible but practically painful.
Here's what you'll run into:
Resource Constraints:
t2.micro instances (1 vCPU, 1GB RAM) struggle with Kubernetes
Argo CD itself needs ~500MB RAM minimum
Your sample applications need additional resources
The result? Constant pod evictions and frustrating performance
Networking Complexity:
Setting up proper ingress on AWS requires understanding load balancers, security groups, and VPCs
Free tier limits on data transfer can bite you during testing
SSL/TLS setup becomes an additional hurdle
Cost Surprises:
While EC2 instances might be free, you'll pay for EBS volumes, data transfer, and load balancers
A simple "free" setup can easily cost $20-30/month
My AWS Free Tier Disaster Story: I spent an entire weekend trying to get Argo CD running on a t2.micro instance. The cluster kept running out of memory, pods were getting killed randomly, and I couldn't access the UI reliably. After fighting with load balancers and security groups, I realized I was making this way harder than it needed to be.
What are better alternatives for trying Argo CD?
After my AWS adventure, I discovered that local development environments are not just easier—they're actually better for learning. Here are your best options:
1. Minikube (My Recommendation)
Single-node Kubernetes cluster on your local machine
Built-in dashboard and easy addon management
Perfect resource control—use what your laptop can handle
Simple networking with minikube tunnel (a setup sketch follows this list)
2. Kind (Kubernetes in Docker)
Extremely fast startup (clusters in seconds)
Great for CI/CD and automated testing
Minimal resource overhead
Easy to reset and recreate
3. Docker Desktop
One-click Kubernetes enable in Docker Desktop
Seamless integration with your existing Docker workflow
Good for macOS and Windows users
Automatic resource management
4. k3s/k3d
Lightweight Kubernetes distribution
Perfect for resource-constrained environments
Fast and reliable
Great for edge computing scenarios
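As promised in the Minikube section, here's a rough sketch of getting Argo CD running on a local cluster. It follows the official getting-started steps; the CPU and memory flags are just suggestions for a modest laptop:

```bash
# Start a single-node local cluster (tune the flags to your machine)
minikube start --cpus=2 --memory=4096

# Install Argo CD from the official manifest
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Forward the Argo CD UI to https://localhost:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443

# Fetch the initial admin password (the username is admin)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```

From there you can log in at https://localhost:8080 and point Argo CD at a repository, for example with an Application manifest like the one shown earlier.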
Why Local Development Wins:
No cloud complexity: Focus on learning Argo CD, not AWS networking
Instant feedback: Changes happen immediately
Cost: Completely free (except your electricity bill)
Reproducible: Easy to reset and start over
Offline capable: Work without internet once set up
So, long story short: try local development if your machine has the resources to handle it.
The Bottom Line
GitOps with Argo CD isn't just a trend—it's a fundamental shift toward more reliable, auditable, and collaborative software delivery. The initial learning curve is worth it for the long-term benefits to your development workflow.
Start small, experiment with local clusters, and gradually work your way up to more complex scenarios. The beauty of GitOps is that once you understand the core concepts, they scale naturally from simple applications to complex, multi-service architectures.
I'm also a beginner, but I'm eager to try new and interesting technologies. Next time, before diving straight into an implementation, I'll seek advice from someone in the industry who has experience with the technology.
For everything else, I can use AI models like ChatGPT, Gemini, Claude, and others.
I appreciate you taking the time to read my article. Stay tuned for more—keep building and learning!