Putting Kubernetes Knowledge to the Test


Introduction
Hello there 👋! Sorry for the delay with last week's article - exams kept me busy. The past couple of weeks of my DevOps learning journey have been incredibly rewarding. After completing my deep dive into Kubernetes concepts in the previous weeks, I decided it was time to put all that theoretical knowledge to the test. This week, I focused on building two comprehensive Kubernetes projects that would help me understand how all these concepts work together in real-world scenarios.
Project 1: Full-Stack Chat Application on MiniKube
The first project I tackled was deploying a three-tier chat application consisting of a React.js frontend, Node.js backend, and MongoDB database. What I found interesting about this project was how it brought together so many Kubernetes concepts I had learned - deployments, services, persistent volumes, secrets, and ingress controllers.
Architecture and Components
The application architecture was straightforward but comprehensive:
Frontend: React.js application for the user interface
Backend: Node.js API server handling chat logic
Database: MongoDB for storing chat messages and user data
Key Kubernetes Concepts Applied
Persistent Storage: One of the most important aspects I learned was implementing persistent storage for MongoDB. I created a Persistent Volume and Persistent Volume Claim to ensure that chat data wouldn't be lost when pods were restarted. For MiniKube, I used a hostPath volume, which stores data directly on the host machine.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongodb
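To go with it, a claim requests that storage for the MongoDB pod. A minimal sketch of the PVC, assuming the same 1Gi size (the names here are illustrative, not necessarily the exact ones from my manifests):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi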
Secrets Management: I learned how critical it is to properly handle sensitive information. The application required a JWT secret key and a MongoDB connection string, which I stored as Kubernetes secrets with Base64 encoding. This was my first real experience with securely managing credentials in Kubernetes.
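For context, a secret of that shape looks roughly like this; the key names are illustrative, and values under data must be base64-encoded (e.g. echo -n 'some-value' | base64):

apiVersion: v1
kind: Secret
metadata:
  name: chat-app-secrets
type: Opaque
data:
  JWT_SECRET: bXktand0LXNlY3JldA==                        # base64 of a placeholder value
  MONGODB_URI: bW9uZ29kYjovL21vbmdvZGI6MjcwMTcvY2hhdGRi    # base64 of mongodb://mongodb:27017/chatdb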
Service Discovery: What impressed me most while learning Kubernetes was seeing how services could connect to each other simply by using service names. Watching the backend reach MongoDB at just mongodb:27017 demonstrated the cluster's automatic service discovery perfectly.
Ingress Controller: Instead of using port-forwarding for access, I configured an ingress controller to route traffic based on hostnames. I had to add chats.aks.com to my local hosts file to access the application through a custom domain.
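The ingress rule looked roughly like this; the paths, service names, and ports below are illustrative, not my exact manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chat-app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: chats.aks.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend       # hypothetical service name
                port:
                  number: 5000      # hypothetical backend port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend      # hypothetical service name
                port:
                  number: 80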
Docker Hub Integration
Before deploying to Kubernetes, I had to build and push Docker images for both frontend and backend to Docker Hub. This step taught me about the importance of having images accessible to the cluster.
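The workflow was the usual build-tag-push cycle, roughly like this (the image names are placeholders for my Docker Hub repositories):

docker build -t <dockerhub-username>/chat-frontend:latest ./frontend
docker build -t <dockerhub-username>/chat-backend:latest ./backend
docker push <dockerhub-username>/chat-frontend:latest
docker push <dockerhub-username>/chat-backend:latest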
Project 2: Voting Application with GitOps using ArgoCD
The second project was significantly more complex and introduced me to the world of GitOps. I deployed a microservices voting application using ArgoCD on a multi-node Kubernetes cluster running on AWS EC2.
Infrastructure Setup
I started by setting up a Kind (Kubernetes in Docker) cluster on an AWS EC2 instance, using this multi-node configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.30.0
  - role: worker
    image: kindest/node:v1.30.0
  - role: worker
    image: kindest/node:v1.30.0
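With the file saved as, say, kind-config.yaml, creating the cluster is a single command (the cluster name here is my choice):

kind create cluster --name voting-cluster --config kind-config.yaml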
This gave me hands-on experience with managing a cluster that closely resembles a production environment with separate control plane and worker nodes.
ArgoCD and GitOps Implementation
ArgoCD Installation: Installing ArgoCD was my first deep dive into GitOps tooling. I learned how to create dedicated namespaces, patch services to expose them externally, and manage RBAC for secure access.
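The core installation from the official manifests boils down to a few commands:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# The initial admin password lives in a generated secret:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d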
GitOps Workflow: The most fascinating part was experiencing true GitOps in action. I configured ArgoCD to do the following (sketched in the manifest below):
Automatically sync applications from my GitHub repository
Monitor changes in the k8s-specifications folder
Apply updates to the cluster whenever I pushed changes to GitHub
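A minimal sketch of such an ArgoCD Application; the repository URL is a placeholder, and the real repo name and target namespace may differ:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: voting-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<my-username>/example-voting-app.git  # placeholder
    targetRevision: main
    path: k8s-specifications
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true  # auto-sync keeps the cluster matching Git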
Real-time Synchronization: I tested the GitOps functionality by changing replica counts directly in GitHub. Within minutes, ArgoCD detected the changes and automatically updated the cluster. Seeing pods scale up and down based on Git commits was incredibly satisfying and showed me the power of GitOps for production deployments.
Microservices Architecture
The voting application consisted of multiple microservices:
Vote Service (Python) - Frontend for casting votes
Result Service (Node.js) - Dashboard showing vote results
Worker Service (.NET) - Processes votes from Redis to PostgreSQL
Redis - Message queue for vote processing
PostgreSQL - Database for storing vote results
This project taught me how different technologies can work together seamlessly in a Kubernetes environment, with each service handling its specific responsibility.
Monitoring with Prometheus and Grafana
I also implemented a Prometheus-Grafana monitoring stack using Helm charts to monitor and visualize the voting application. This was my first hands-on experience with Helm charts in a real project, and it perfectly demonstrated why Helm is considered the package manager for Kubernetes.
I used the official Prometheus community Helm chart, which automatically deployed Prometheus server, Alertmanager, and Grafana with pre-configured dashboards. Setting up the entire monitoring stack with a single helm install command was incredibly efficient compared to manually creating dozens of YAML files.
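For reference, the whole setup comes down to a couple of commands; a sketch assuming the kube-prometheus-stack chart (the release and namespace names are my choices):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace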
The Grafana dashboards provided real-time insights into cluster metrics, pod resource usage, and application performance. I could visualize CPU and memory consumption across all microservices, monitor the health of PostgreSQL and Redis, and track request rates to the vote and result services. This monitoring setup gave me a clear understanding of how the voting application was performing under load and helped me identify potential bottlenecks.
What I Learned from These Projects
Real-world Application Architecture: Both projects showed me how theoretical Kubernetes concepts translate into practical applications. Understanding how frontend services communicate with backend APIs, and how those APIs connect to databases, gave me a complete picture of modern application deployment.
Security Best Practices: Working with secrets, RBAC, and service accounts taught me the importance of security in Kubernetes. I learned that proper access control and secret management aren't optional - they're fundamental requirements.
GitOps Philosophy: The ArgoCD project introduced me to the GitOps approach, where Git becomes the single source of truth for infrastructure and applications. This paradigm shift from push-based to pull-based deployments feels much more reliable and auditable.
Networking Understanding: Configuring services, ingress controllers, and port forwarding gave me practical experience with Kubernetes networking. I now understand how traffic flows within a cluster and how to expose applications externally.
Persistent Storage: Managing stateful applications like databases taught me about persistent volumes, storage classes, and the importance of data persistence in containerized environments.
Challenges I Faced
1️⃣ MiniKube Ingress Controller Issues
Initially, the ingress controller wasn't routing traffic properly to my chat application. The frontend was accessible, but API calls to the backend were failing.
Solution: I discovered that I needed to enable the MiniKube ingress addon with minikube addons enable ingress and ensure that the ingress resource was properly configured with the correct service names and ports.
2️⃣ ArgoCD UI Access Problems
After installing ArgoCD, I couldn't access the web interface even though the pods were running. The service was only accessible within the cluster.
Solution: I had to patch the ArgoCD server service to change its type from ClusterIP to NodePort, then use port forwarding with the --address 0.0.0.0 flag to make it accessible from my browser. I also had to add the appropriate inbound rules to the EC2 security group.
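For reference, those two commands looked like this:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl port-forward svc/argocd-server -n argocd 8080:443 --address 0.0.0.0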
3️⃣ Docker Permission Issues on EC2
When setting up Kind on the EC2 instance, I kept getting "permission denied" errors when trying to run Docker commands.
Solution: I needed to add the ubuntu user to the Docker group with sudo usermod -aG docker ubuntu and then refresh the group membership with newgrp docker. This allowed me to run Docker commands without sudo.
4️⃣ JWT Token Authentication Issues in Chat Application
Even after properly setting up the JWT secret in Kubernetes secrets, the frontend was continuously logging users out of the chat application. Users could register and log in initially, but within seconds they would be automatically logged out, making the application unusable.
Solution: I discovered that the issue was with how the JWT secret was being referenced in the backend deployment. The environment variable name in the deployment YAML had to exactly match what the backend was expecting. I also learned that the base64 encoded secret in Kubernetes gets automatically decoded when injected as an environment variable, so the backend was receiving the correct plaintext value.
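A sketch of the corrected reference in the backend deployment (the variable and secret names here are illustrative):

env:
  - name: JWT_SECRET              # must exactly match the name the backend code reads
    valueFrom:
      secretKeyRef:
        name: chat-app-secrets    # the Kubernetes secret holding the encoded value
        key: JWT_SECRET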
Resources I Used
Let's Connect!
If you have any recommended resources, better approaches to my challenges, or insights, I'd love to hear them! Drop your thoughts in the comments.
Have a wonderful day!