Kubernetes Deployments: ArgoCD and GitHub Actions in Action

Jack Japar
7 min read

I recently developed a distributed voting application using Spring Boot and Kafka, so I decided to build a CI/CD pipeline for the project and deploy it into a Kubernetes cluster. I containerized the services with Docker and set up a GitHub Actions pipeline that pushes version-tagged images to DockerHub. ArgoCD then auto-syncs the updated manifests into a Kubernetes cluster running on AWS EC2. This setup ensures that whenever code is committed and merged into the main branch, a new version of the app is rolled out seamlessly.

In this post, I’ll walk you through everything step by step: from setting up a simple Kubernetes cluster to configuring a CI/CD pipeline with ArgoCD and GitHub Actions.

Outline

  • Spin up a new K3s cluster on AWS EC2

  • Kubernetes specification files for the project

  • Install ArgoCD on the cluster

  • Configure GitHub Actions

  • Test the CI/CD pipeline


Project Overview: CI/CD Setup

Below is the application we want to deploy. It's a simple distributed voting application orchestrated with Docker containers. You can find the source code here:
🔗 https://github.com/devsteppe9/voting_app

Here’s a visual overview of the directory structure:

├── .github               # GitHub Actions workflow files
│   └── workflows
│       ├── build-result.yaml
│       ├── build-vote-session.yaml
│       ├── build-vote.yaml
│       └── build-worker.yaml
├── docker-compose.yml    # Local development
├── k8s-specifications    # Kubernetes manifests
├── result                # Node.js web app for real-time results
├── vote                  # Spring Boot/Thymeleaf vote submission app
├── vote-session          # Spring Boot REST API to manage sessions
└── worker                # Spring Boot service to persist votes

Spin Up a New K3s Cluster on AWS EC2

If you already have a Kubernetes cluster running, feel free to skip this section.

I launched a t4g.medium Ubuntu EC2 instance and saved the .pem key for later SSH access. If you're not familiar with it, K3s is a lightweight, production-ready Kubernetes distribution developed by Rancher Labs. Note that t4g.medium is an ARM-based instance type; I chose it because I work on an ARM-based MacBook at home, which makes it convenient to build and push images directly from my laptop when I need to test something quickly.

Make sure to open these ports on your EC2 Security Group:

  • 8080: ArgoCD UI

  • 22: SSH access

  • 6443: Kubernetes API

  • 31000–31002: Application ports

💡
I have exposed these ports to the world (0.0.0.0/0). If you have a static public IP address, it is better to make ports 22, 6443, and 8080 accessible only from your own IP range, as in the sketch below.
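
As a rough sketch, here is how those rules could be created with the AWS CLI, keeping the management ports locked to a single address. The security group ID and the 203.0.113.10/32 CIDR are placeholders; substitute your own values.

# Placeholders: replace with your security group ID and your public IP
SG_ID=sg-0123456789abcdef0
MY_IP=203.0.113.10/32

# Management ports (SSH, Kubernetes API, ArgoCD UI): reachable only from my IP
for PORT in 22 6443 8080; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr "$MY_IP"
done

# Application NodePorts: open to the world
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 31000-31002 --cidr 0.0.0.0/0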

To bootstrap the K3s cluster, I used k3sup (pronounced 'ketchup'), a handy tool built by Alex Ellis. Install it from your laptop:

curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
k3sup --help

Now install K3s to your EC2 instance:

💡 Replace $IP and the key path with your own. $HOME/controlplanekeypair.pem is the private key I saved on my laptop when I launched the EC2 instance. (On Ubuntu AMIs the default SSH user is usually ubuntu rather than ec2-user; adjust the --user flag if needed.)

export IP=54.90.96.48
k3sup install --ip $IP --user ec2-user \
  --ssh-key $HOME/controlplanekeypair.pem

It might take a couple of minutes. If the command above completes without errors, voila! Your Kubernetes cluster is ready to run, and the kubeconfig file has been written to your local machine.

Verify:

export KUBECONFIG=`pwd`/kubeconfig
kubectl config use-context default
kubectl get node -o wide

# -------- Output ------- #
NAME                           STATUS   ROLES                  AGE    VERSION
ip-172-31-85-66.ec2.internal   Ready    control-plane,master   7m4s   v1.32.5+k3s1

Kubernetes Specification Files

These are the deployment and service specs I created to deploy the app. ArgoCD watches these files and updates the deployments whenever GitHub Actions pushes new image tags to DockerHub (an illustrative snippet of one manifest follows the listing):

├── k8s-specifications
│   ├── kafka-deployment.yaml
│   ├── kafka-service.yaml
│   ├── result-deployment.yaml
│   ├── result-service.yaml
│   ├── vote-db-deployment.yaml
│   ├── vote-db-service.yaml
│   ├── vote-deployment.yaml
│   ├── vote-service.yaml
│   ├── vote-session-deployment.yaml
│   ├── vote-session-service.yaml
│   └── worker-deployment.yaml

More details here: k8s-specifications
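
To give a feel for what these manifests contain (and what the GitHub Actions workflow rewrites later), here is a minimal sketch of a deployment such as vote-deployment.yaml. The DockerHub username, replica count, and container port are placeholders; check the repository for the real files.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
        - name: vote
          # GitHub Actions replaces this tag with the commit SHA on every build
          image: your-dockerhub-username/voting_app-vote:latest
          ports:
            - containerPort: 8080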



Install ArgoCD on Kubernetes

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Check that all ArgoCD pods are up and running:
kubectl get pods -n argocd

# -------- Output ------- #
NAME                                                READY   STATUS    RESTARTS    AGE
argocd-application-controller-0                     1/1     Running   0           90s
argocd-applicationset-controller-777d5b5dc7-w8blz   1/1     Running   0           90s
argocd-dex-server-7d8fcd845-lg9hr                   1/1     Running   0           90s
argocd-notifications-controller-655df7c996-q2vp4    1/1     Running   0           90s
argocd-redis-574484f6db-ssf2c                       1/1     Running   0           90s
argocd-repo-server-57449f957c-cdjc5                 1/1     Running   0           90s
argocd-server-7dd4c8cf5f-6x68f                      1/1     Running   0           90s

Expose ArgoCD on NodePort:

cat <<EOF > argocd-server-service.yml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server-nodeport
  labels:
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: argocd
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 31002
      protocol: TCP
  selector:
    app.kubernetes.io/name: argocd-server
EOF

kubectl create -f argocd-server-service.yml -n argocd

The service above exposes the ArgoCD GUI on port 31002.

Access http://YOUR_IP:31002 in your browser.
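
If you prefer not to expose a NodePort at all, the standard alternative from the ArgoCD docs is to port-forward the server service from your laptop (your kubeconfig already points at the cluster), then open https://localhost:8080:

kubectl port-forward svc/argocd-server -n argocd 8080:443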

Get the admin password:

kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath={.data.password} | base64 -d

To log in to the ArgoCD dashboard, use admin as the username and the password returned by the command above.
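
If you prefer the terminal, the same credentials also work with the argocd CLI. This is just a sketch assuming you have the CLI installed and are going through the NodePort above; --insecure skips verification of the self-signed certificate.

# Fetch the initial admin password, then log in through the NodePort
ARGOCD_PWD=$(kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath={.data.password} | base64 -d)
argocd login YOUR_IP:31002 --username admin --password "$ARGOCD_PWD" --insecure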


Create ArgoCD App

  1. Go to Applications → New App → Edit as YAML

  2. Paste:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vote-app
spec:
  destination:
    namespace: vote-app
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/devsteppe9/voting_app
    path: k8s-specifications
    targetRevision: main
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

  3. Then click Create
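
Alternatively, you can skip the UI and apply the same Application manifest with kubectl. This assumes you saved the YAML above to a local file, named vote-app.yaml here purely for illustration.

# ArgoCD Application resources live in the argocd namespace
kubectl apply -n argocd -f vote-app.yaml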


GitHub Actions Setup

I prepared 4 workflow files under the .github/workflows directory. Each of them detects code changes under the result, vote-session, vote, and worker subdirectories, respectively.

├── .github               # GitHub Actions workflow files
│   └── workflows
│       ├── build-result.yaml       # workflow for result app
│       ├── build-vote-session.yaml # workflow for vote-session app
│       ├── build-vote.yaml         # workflow for vote app
│       └── build-worker.yaml       # workflow for worker app

The workflow file below is for the vote service, one of the 4 workflows above. The remaining 3 are nearly identical; the only differences are the Docker image names/tags and the on.push.paths values.

name: Integrate vote app

on:
  push:
    branches:
      - main
    paths:
      - 'vote/**'
env:
  DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
  DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}

permissions:
  contents: write

jobs:
  build-vote-app:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ env.DOCKERHUB_TOKEN }}

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: ./vote
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}
            ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:latest
          build-args: |
            KAFKA_BOOTSTRAP_SERVERS=kafka:9092
            SESSION_API_URL=http://vote-session:8080/sessions
      - name: Update Kubernetes deployment
        # Replace image tag in deployment.yaml with new Docker image tagged by commit SHA
        run: |
          sed -i "s|image: ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:.*|image: ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}|g" k8s-specifications/vote-deployment.yaml
          echo "Updated image in k8s-specifications/vote-deployment.yaml"

      - name: Commit and push changes
        run: |
          git config --local user.name "github-actions[bot]"
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add k8s-specifications/vote-deployment.yaml
          git commit -m "Update vote deployment image to ${{ env.DOCKERHUB_USERNAME }}/voting_app-vote:${{ github.sha }}"
          git pull origin main --rebase || false
          git push origin main

The workflow builds Docker images for the linux/amd64 and linux/arm64 architectures, pushes them to DockerHub, and updates the corresponding vote-deployment.yaml with the new commit-SHA tag, which ArgoCD then syncs into the cluster.
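
If you want to reproduce the same multi-arch build locally before wiring up the workflow, a rough equivalent with Docker Buildx looks like this. It assumes you are logged in to DockerHub and have a buildx builder with QEMU emulation available; the image tag is just an example.

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg KAFKA_BOOTSTRAP_SERVERS=kafka:9092 \
  --build-arg SESSION_API_URL=http://vote-session:8080/sessions \
  -t your-dockerhub-username/voting_app-vote:dev \
  --push ./vote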

You need to set up the DockerHub secrets (DOCKERHUB_USERNAME and DOCKERHUB_TOKEN) in your GitHub repository. The official guideline explains how: Using secrets in GitHub Actions
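
As a quick sketch, both secrets can also be set from the terminal with the GitHub CLI, assuming gh is installed and authenticated for the repository:

gh secret set DOCKERHUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN --body "your-dockerhub-access-token"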


Test the Deployment

As shown in the ArgoCD dashboard, the application has successfully synced the changes. I experimented by adding and removing some code in the result service and pushing it to the main branch of the repository. As you can see, 7 more revisions were created for the result service, and the latest one is the version currently serving, running as a pod.

Test your application at the endpoints below (a quick curl check follows the list):

  • Vote App: http://YOUR_IP:31000/votes/1

  • Result App: http://YOUR_IP:31001/results/1
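
For a quick smoke test from the terminal, something like the following should return a response from each service; the IP is your EC2 public address, and 1 is just an example id.

export IP=YOUR_IP
curl -i http://$IP:31000/votes/1
curl -i http://$IP:31001/results/1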


Recap

GitHub Actions

  • Builds and pushes Docker images

  • Updates Kubernetes deployment files

ArgoCD

  • Monitors k8s-specifications/ directory

  • Auto-syncs updated manifests into the K8s cluster

In conclusion, deploying a distributed voting application with Kubernetes, ArgoCD, and GitHub Actions gives you a robust, automated CI/CD pipeline. You can extend the workflow further with Docker image scanning and linting stages, among other checks, though those are beyond the scope of this post. By combining these technologies, developers can manage deployments efficiently, track changes, and improve the overall development and release process.
