🚀 Hub-Spoke GitOps Setup with Minikube + ArgoCD

Managing multiple Kubernetes clusters can quickly get complex, and that’s where the Hub-Spoke GitOps pattern comes in handy: ArgoCD runs on a hub cluster and manages workloads deployed to one or more spoke clusters. This project is inspired by @iam-veeramalla’s repo and demonstrates GitOps-based deployment with ArgoCD in a hub-and-spoke setup.

👉 The twist?
Instead of running this on AWS EKS (paid), we’ll set up both Hub and Spoke clusters locally using Minikube — so it’s completely free.

🛠 Tools Used

  • Kubernetes → Minikube for local clusters

  • ArgoCD → GitOps tool for continuous delivery

  • kubectl → Kubernetes CLI

  • GitHub → To store application manifests


📂 Architecture

  • Hub cluster → Runs ArgoCD and manages deployments.

  • Spoke cluster(s) → Target clusters where apps are deployed.

  • Git repository → Contains app manifests and configuration.

flowchart TD
    GitHub[(Git Repository)] --> Hub["Hub Cluster (ArgoCD)"]
    Hub --> Spoke1[Spoke Cluster 1]
    Hub --> Spoke2[Spoke Cluster 2]

💻 Step-by-Step Setup

1️⃣ Install Prerequisites

# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
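
A quick sanity check that both binaries are installed and on the PATH (the exact versions will differ on your machine):

minikube version
kubectl version --client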

2️⃣ Start Hub Cluster (ArgoCD Cluster)

minikube start -p hub --cpus=2 --memory=4g
kubectl config use-context hub
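
Confirm the hub node is up and ready before installing ArgoCD:

kubectl get nodes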

3️⃣ Install ArgoCD in Hub Cluster

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Check pods:

kubectl get pods -n argocd
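
The ArgoCD pods take a minute or two to come up; waiting on them explicitly avoids racing ahead (the timeout value here is an arbitrary choice):

kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s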

4️⃣ Expose ArgoCD Server via NodePort

By default, the argocd-server service is of type ClusterIP. Patch it to NodePort so it is reachable from outside the cluster:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl get svc argocd-server -n argocd

Sample output:

argocd-server   NodePort   10.96.12.115   <none>   80:31577/TCP,443:30443/TCP   1m

Get Minikube IP:

minikube ip -p hub

👉 Access ArgoCD at:
http://<minikube-ip>:<nodeport>

Example:
http://192.168.49.2:31577

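If the NodePort is not reachable from your host (this can happen with some Minikube drivers), a port-forward against the hub context is a workable alternative — local port 8080 is an arbitrary choice:

kubectl port-forward svc/argocd-server -n argocd 8080:443 --context hub
# then open https://localhost:8080
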
Get admin password:

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Login with:

  • Username: admin

  • Password: (from above command)


5️⃣ Start Spoke Cluster (Target Cluster)

minikube start -p spoke --cpus=2 --memory=4g
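
Note that minikube start switches your current kubectl context to the new spoke profile. Both profiles should now be listed:

minikube profile list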

6️⃣ Add Spoke Cluster to ArgoCD

kubectl config use-context hub
kubectl config get-contexts   # find spoke context name
argocd cluster add <spoke-context-name>
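
argocd cluster add needs the argocd CLI, logged in to the hub’s ArgoCD API server. A minimal sketch, assuming Linux amd64 and the NodePort plus admin password from step 4:

# Install the argocd CLI
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd && sudo mv argocd /usr/local/bin/

# Log in to ArgoCD on the hub (self-signed cert, hence --insecure)
argocd login <minikube-hub-ip>:<https-nodeport> --username admin --password <admin-password> --insecure

Note: the ArgoCD pods on the hub must be able to reach the spoke cluster’s API server URL stored in your kubeconfig; depending on the Minikube driver, this address may need adjusting.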

7️⃣ Prepare GitOps Repo

In your GitHub repo (e.g., argocd-hub-spoke-demo), create a file at:

manifests/guest-book/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
      - image: gcr.io/heptio-images/ks-guestbook-demo:0.2
        name: guestbook-ui
        ports:
        - containerPort: 80
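
ArgoCD pulls manifests from Git, not from your working directory, so commit and push the file (the branch name main is assumed here):

git add manifests/guest-book/deployment.yaml
git commit -m "Add guestbook deployment"
git push origin main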

8️⃣ Create ArgoCD Application

In the ArgoCD UI, create a new Application with the following settings (a CLI equivalent is sketched after this list):

  • Name: demo-app

  • Project: default

  • Repository URL: your GitHub repo

  • Path: manifests/guest-book

  • Destination Cluster: spoke cluster

  • Namespace: default

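The same Application can also be created from the argocd CLI — a sketch, assuming you are logged in as in step 6; the spoke cluster’s API server URL is shown by argocd cluster list:

argocd cluster list   # note the spoke cluster's server URL
argocd app create demo-app \
  --repo https://github.com/Harshalv21/argocd-hub-spoke-demo.git \
  --path manifests/guest-book \
  --dest-server <spoke-server-url> \
  --dest-namespace default \
  --project default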

9️⃣ Sync and Deploy

Click SYNC → ArgoCD will pull from GitHub and deploy to the spoke cluster.
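
The sync can also be triggered and checked from the CLI (assuming the app name from step 8):

argocd app sync demo-app
argocd app get demo-app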

Verify:

kubectl get pods --context=spoke

🎯 Outcome

✅ Multi-cluster GitOps replicated locally
✅ Works just like AWS EKS hub-spoke setups
✅ Zero cloud charges
✅ Hands-on experience with ArgoCD + GitOps

GitHub: https://github.com/Harshalv21/argocd-hub-spoke-demo.git
