How to Build a CI/CD Pipeline with ArgoCD on Civo Kubernetes
In this blog post, we will build a complete, production-ready CI/CD setup for deploying microservices with ArgoCD on a Civo Kubernetes cluster. We will also integrate PostgreSQL, using a Persistent Volume (PV) and Persistent Volume Claim (PVC) so that the database's data persists across pod restarts.
Prerequisites
- A running Civo Kubernetes cluster (if you don’t have one, check out the Civo CLI to create one).
- Basic knowledge of Kubernetes, Docker, PostgreSQL, and GitOps.
- A Docker Hub account (for pushing your microservice images).
- ArgoCD installed on your Civo Kubernetes cluster (we will cover the installation below).
Step 1: Installing ArgoCD on Civo Kubernetes Cluster
First, we need to install ArgoCD to manage our application deployments using GitOps principles.
1.1 Install ArgoCD
Execute the following commands to install ArgoCD:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
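The installation can take a minute or two. Before continuing, confirm that the ArgoCD components are up:
kubectl get pods -n argocd
All pods (argocd-server, argocd-repo-server, argocd-application-controller, and so on) should reach the Running state.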
1.2 Access ArgoCD UI
ArgoCD comes with a web-based UI to manage deployments. To access it:
- Set up port forwarding:
kubectl port-forward svc/argocd-server -n argocd 8080:443
- Open a browser and navigate to
https://localhost:8080
(ArgoCD serves a self-signed certificate by default, so your browser may show a security warning; accept it to proceed, or try an incognito window). You will need to log in with the default admin user:
- Get the initial admin password:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d
- Log in with the username admin and the password retrieved above.
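Alternatively, if you have the argocd CLI installed, you can log in from the terminal; the --insecure flag skips verification of the self-signed certificate:
argocd login localhost:8080 --username admin --password <password-from-above> --insecure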
You now have access to the ArgoCD UI where you can manage your applications.
Step 2: Setting Up GitOps Workflow for Microservices
In this step, we’ll configure a sample microservice, user-service, which uses PostgreSQL as its database and connects to it via environment variables.
2.1 Preparing Kubernetes Manifests for user-service
We will define a Deployment and a Service for user-service. Additionally, we will set up PostgreSQL with a Persistent Volume Claim (PVC) for persistent storage.
Create the following YAML files and commit them to your Git repository.
2.1.1 user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: <your-docker-image>
          ports:
            - containerPort: 8081
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: POSTGRES_DB
              value: "user_db"
            - name: POSTGRES_HOST
              value: "postgres-service"
2.1.2 user-service-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
  type: ClusterIP
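The Service maps port 80 to the container port 8081. Once the service is deployed, you can sanity-check it locally with a port-forward (assuming your service exposes an HTTP endpoint at /):
kubectl port-forward svc/user-service 8081:80
curl http://localhost:8081/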
Step 3: PostgreSQL with Persistent Storage
3.1 PostgreSQL Deployment and PVC
Next, we'll define PostgreSQL and link it with a Persistent Volume Claim (PVC) to ensure data persists even if the PostgreSQL pod restarts.
3.1.1 postgres-deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  POSTGRES_USER: cG9zdGdyZXM=     # base64 encoded "postgres"
  POSTGRES_PASSWORD: cGFzc3dvcmQ= # base64 encoded "password"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: POSTGRES_DB
              value: "user_db"
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
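The Secret values are base64-encoded, not encrypted. You can generate your own values (please pick a stronger password than this example) either by encoding the strings directly or by letting kubectl produce the manifest for you:
echo -n 'postgres' | base64
kubectl create secret generic postgres-secret \
  --from-literal=POSTGRES_USER=postgres \
  --from-literal=POSTGRES_PASSWORD=password \
  --dry-run=client -o yaml
Keep in mind that base64-encoded Secrets committed to a Git repository are effectively plaintext; for a real GitOps setup, consider a tool such as Sealed Secrets or an external secrets manager.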
3.1.2 postgres-pv.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres
  type: ClusterIP
This configuration gives the PostgreSQL pod persistent storage for the database. Note that we only define a PVC here: on Civo, the cluster's default storage class dynamically provisions the backing Persistent Volume when the claim is created.
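After the manifests are applied, you can confirm that the claim was bound to a dynamically provisioned volume:
kubectl get pvc postgres-pv-claim
kubectl get pv
The PVC's STATUS column should read Bound.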
Step 4: Setting Up ArgoCD Application
Now, we need to tell ArgoCD to monitor our Git repository and automatically sync our application to Kubernetes whenever changes are made.
4.1 Create ArgoCD Application
We’ll create an ArgoCD Application for the user-service using the following YAML file.
4.1.1 user-service-argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: user-service
  namespace: argocd
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    repoURL: 'https://github.com/yourusername/user-service-repo.git'
    targetRevision: HEAD
    path: manifests
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
4.2 Apply the ArgoCD Application
Once the YAML file is ready, apply it to your cluster:
kubectl apply -f user-service-argocd-application.yaml
This will instruct ArgoCD to automatically sync your Kubernetes manifests from your Git repository.
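You can check the sync status from the command line as well, either through the Application resource itself or with the argocd CLI (if installed and logged in):
kubectl get applications -n argocd
argocd app get user-service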
Step 5: Setting Up Ingress (Optional)
To expose your microservice, create an Ingress resource and secure it with TLS via Let’s Encrypt. This assumes an ingress controller is running in your cluster and that cert-manager is installed with a ClusterIssuer named letsencrypt-prod. Here’s a sample ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - your-domain.com
      secretName: tls-secret
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
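Once the ingress is synced and cert-manager has issued the certificate, verify that the ingress received an address and the TLS secret was created:
kubectl get ingress user-ingress
kubectl get secret tls-secret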
Step 6: Testing and Monitoring with ArgoCD
1. Push changes to GitHub: Modify your Kubernetes manifests and push them to your Git repository (see the worked example after this list).
2. Observe the sync in the ArgoCD UI: Head over to the ArgoCD UI to watch the synchronization of your application in action.
3. Deployment: Your Kubernetes cluster will automatically deploy or update the application based on the changes pushed to the Git repository.
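For instance, a complete change cycle might look like this (assuming your manifests live in a manifests/ directory, as configured in the Application above):
# Scale user-service from 3 to 5 replicas
sed -i 's/replicas: 3/replicas: 5/' manifests/user-service-deployment.yaml
git add manifests/user-service-deployment.yaml
git commit -m "Scale user-service to 5 replicas"
git push
# ArgoCD polls the repository (every 3 minutes by default) and syncs the change
kubectl get deployment user-service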
Conclusion
With this setup, you now have a fully production-ready CI/CD pipeline using ArgoCD on Civo Kubernetes. Every time you push changes to your Kubernetes manifests in your Git repository, ArgoCD will automatically sync those changes and ensure your application state in the cluster matches the desired state in Git.
Additionally, PostgreSQL is configured with a Persistent Volume for persistent data, ensuring the database survives pod restarts.
Feel free to extend this setup with more microservices, monitoring tools like Prometheus, and alerting systems as needed for your production environment.
Happy deploying with ArgoCD!