Deploying Microservices on Kubernetes with Helm — Lessons from the Trenches

Author: Taranpreet Singh
Special thanks: Cloud Champ 🙌

Introduction

This blog documents my journey of deploying a microservices-based video-to-audio converter application using Kubernetes and Helm — but more importantly, the challenges I faced along the way and how I tackled them.

The experience taught me more than just YAML syntax or Helm charts — it gave me real insights into IAM roles, Kubernetes resource constraints, and the power of microservices architecture.

Project Summary

The app has four microservices:

  • Auth (with PostgreSQL + JWT)

  • Upload/Download (using MongoDB + RabbitMQ)

  • Video Processor (MP4 → MP3 conversion)

  • Notification (SNS/email alerts)

Each service runs as a separate Kubernetes deployment and communicates asynchronously.
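
To sketch what "a separate Kubernetes deployment" means in practice (the name and stand-in image below are illustrative, not the repo's actual manifests), each service is declared independently:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification          # one Deployment per microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: notification
  template:
    metadata:
      labels:
        app: notification
    spec:
      containers:
        - name: notification
          image: nginx         # stand-in image; the real ones come from the Helm charts
EOF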

🔗 https://github.com/12taran/microservices-python-app.git


Real-World Challenges I Faced (and Solved)


1. AWS EKS IAM Permissions — kubectl get nodes Not Working

When I set up my EKS cluster using eksctl, I ran into this issue:

kubectl get nodes

Error: No resources found / Unauthorized

Cause:

Although I had configured AWS CLI using my IAM user via:

aws configure

...my IAM user didn’t have the required permissions to access or manage the EKS cluster created via eksctl.
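
A quick way to confirm which identity the AWS CLI (and therefore kubectl) is authenticating as:

aws sts get-caller-identity

By default, only the IAM identity that created the EKS cluster is granted admin (system:masters) access inside the cluster, which is why any other user gets Unauthorized.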

Solution:

For learning purposes, I temporarily switched to the root user (⚠️ not recommended in production) and updated the kubeconfig using:

aws eks update-kubeconfig --region <region> --name <cluster-name>

Once I confirmed it worked, I made a note to later set up proper IAM roles and aws-auth config maps for safe and secure access — which I’ll cover in a follow-up post.
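
As a preview of that follow-up: the proper fix boils down to mapping the IAM user into the cluster's aws-auth ConfigMap, which eksctl can do in one command (the ARN, username, and cluster name below are placeholders):

eksctl create iamidentitymapping \
  --cluster <cluster-name> \
  --region <region> \
  --arn arn:aws:iam::111122223333:user/<iam-user> \
  --username <iam-user> \
  --group system:masters

Note that system:masters just mirrors the root-user shortcut; in production you'd map the user to a much narrower group.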

Lesson: Learning the fundamentals of IAM and role-based access in AWS is key before diving deep into Kubernetes on EKS.


2. Pod in Pending State — Disk Space Nightmare

Another real headache occurred when the notification service pod stayed stuck in a Pending state.

Diagnosis:

kubectl describe pod <pod-name>

Revealed the issue:

0/1 nodes are available: insufficient ephemeral storage.

My node (local minikube in my case, but the same happens on cloud nodes) was out of disk space.
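
You can confirm this from the node itself:

# Check the Allocatable section for ephemeral-storage
kubectl describe node <node-name>

# On minikube, inspect the VM's disk usage directly
minikube ssh -- df -h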

Solution:

I simply scaled down some other services using:

kubectl scale deployment <deployment-name> --replicas=0

This freed up enough resources for the pending pod to be scheduled.
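
A more durable fix (a sketch, not taken from the project's charts) is to declare ephemeral-storage requests and limits, so the scheduler only places a pod on a node with enough free disk, and pods that overrun their share get evicted instead of starving neighbors:

kubectl patch deployment <deployment-name> --patch '
spec:
  template:
    spec:
      containers:
        - name: <container-name>
          resources:
            requests:
              ephemeral-storage: "256Mi"
            limits:
              ephemeral-storage: "1Gi"
'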

Why This Matters: Microservices FTW

Both of these problems actually highlighted the strengths of microservices:

  • I could independently scale deployments up/down as needed (e.g., scale notification service to 0 temporarily).

  • I didn’t need to bring down the entire application — each service runs in isolation.

  • Fine-grained control allowed me to debug and fix issues per service.


Tech Stack

  • Kubernetes + Helm: for orchestration and deployment (see the install sketch after this list)

  • RabbitMQ: decoupling services and handling async tasks

  • PostgreSQL + MongoDB: for relational and document data

  • AWS SNS: email notifications

  • Python + FastAPI: lightweight services
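
The Helm workflow itself stays simple. Assuming one chart per service (the paths and release names below are illustrative, not necessarily the repo's exact layout):

# Install a service's chart from a local path
helm install rabbitmq ./charts/rabbitmq

# Roll out changes after editing values or templates
helm upgrade rabbitmq ./charts/rabbitmq

# See what's currently deployed
helm list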


Thanks to Cloud Champ

A special thanks to Cloud Champ, whose resources helped me overcome key Kubernetes and Helm challenges. Their hands-on, practical content is perfect for learning these tools deeply.


Final Thoughts

This was less about building a fancy product and more about understanding how to deploy, troubleshoot, and scale microservices in a cloud-native way. I'm still improving the architecture and will later revisit IAM permissions and monitoring.

If you're diving into Kubernetes and microservices — expect things to break. But that’s exactly where the learning happens.
