Deploy a Production-Ready App on Kubernetes with Kubeadm


INTRODUCTION
In the world of Kubernetes, managed services like GKE, EKS, and AKS often steal the spotlight. But for those of us who love getting our hands dirty and understanding the nuts and bolts of Kubernetes, kubeadm offers the perfect playground. While there are many tools out there for local Kubernetes clusters, like KIND, K3s, and Minikube, these aren't typically used in production. So today I will guide you through deploying an application while following best practices. If you have watched a lot of Kubernetes tutorials but want to know how it all works together, stay tuned; by the end of this blog you will have learned a lot. I would like to highlight that everything in this tutorial can also be done on EKS or GKE, but I still suggest you use kubeadm to avoid getting billed. I assume you already have a kubeadm cluster ready; if not, you can easily create one, and there are plenty of tutorials available for that.
What we will be deploying
We will deploy a 3-tier web app, and I will also show you how to expose it to the Internet. The web app has a React frontend and a Node backend, and uses MongoDB to store data (i.e., a MERN app).
So fork the repository https://github.com/skymonil/CAAM.git and get ready.
Deploying MongoDB
We will first create a MongoDB StatefulSet and expose it using a headless service. A StatefulSet (STS) is a Kubernetes controller designed for managing stateful applications, which require stable network identities, persistent storage, and ordered deployment or scaling. Unlike Deployments, which are ideal for stateless apps, StatefulSets maintain a unique identity for each pod (like mongodb-0, mongodb-1, etc.), ensuring consistency across restarts.
This makes StatefulSets a perfect fit for databases like MongoDB, where each replica node must maintain its own data and identity over time. By using a headless service, we enable each MongoDB pod to be directly addressable within the cluster, which is critical for replication and peer communication. We will use the local-path provisioner to persist the data, though you could also set up an NFS server for the same purpose.
So first we will install the local-path provisioner:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
We will then deploy the MongoDB StatefulSet and the headless service:
kubectl apply -f mongo-statefulset.yml -f mongo-headless-service.yml
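For reference, here is a minimal sketch of what a headless service for the StatefulSet could look like. The actual mongo-headless-service.yml in the repo may differ, and the app: mongo selector is an assumption, but the service name mongo and port 27017 match the connection string we will build later; the StatefulSet's volumeClaimTemplates would reference the local-path StorageClass installed above.
apiVersion: v1
kind: Service
metadata:
  name: mongo              # gives each pod a DNS name like mongo-0.mongo.default.svc.cluster.local
  namespace: default
spec:
  clusterIP: None          # headless: no virtual IP, DNS resolves straight to the pod IPs
  selector:
    app: mongo             # assumption: must match the pod labels in mongo-statefulset.yml
  ports:
    - name: mongo
      port: 27017
      targetPort: 27017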
However, a better approach would be to use the MongoDB Kubernetes Operator, especially for production-grade deployments. While manually creating a StatefulSet gives you control and insight into how MongoDB runs on Kubernetes, it also requires significant manual effort in terms of configuration, scaling, monitoring, and handling failovers. This can be complex and error-prone, especially when dealing with replica sets, persistent storage, security (TLS, authentication), and backup strategies. The MongoDB Kubernetes Operator simplifies and automates much of this complexity. Developed and maintained by MongoDB Inc., the operator:
Automatically provisions and configures replica sets
Manages seamless scaling of MongoDB nodes
Handles automated failover and recovery
Supports TLS/SSL encryption, authentication, and access control
Enables automated backups and monitoring integration
Provides custom resource definitions (CRDs) that allow you to declaratively define MongoDB clusters in YAML
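For illustration, a declarative replica set using the operator's MongoDBCommunity CRD might look roughly like the sketch below. Treat it as a rough outline rather than a drop-in manifest: the resource name, user, and Secret names are placeholders, the operator itself must already be installed, and the referenced password Secret must exist.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb-replica-set      # placeholder name
spec:
  members: 3                     # three replica set members, like our StatefulSet
  type: ReplicaSet
  version: "4.4.19"              # pick a version your CPUs support (see the AVX note below)
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: app-user                    # placeholder user
      db: admin
      passwordSecretRef:
        name: app-user-password         # a pre-created Secret holding the password
      scramCredentialsSecretName: app-user-scram
      roles:
        - name: readWriteAnyDatabase
          db: admin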
I tried to implement this myself but hit an error about AVX support, which is mandatory for MongoDB 5.0+, so I decided to use MongoDB 4.4 instead. If you are using Proxmox or some other hypervisor you should not face this issue, but I did run into it on VirtualBox. Most modern processors support AVX; you can check whether your VMs expose AVX with the command below:
lscpu | grep -o "avx[^ ]*"
Now we have created 3 replicas of the MongoDB pod, but how will the pods stay in sync? If you have worked on distributed systems, you will know what I am talking about, but let me explain. Assume each of the 3 replicas stores its data on the node where it runs. If the mongo-1 pod is on Node 1 and a user-creation request is sent to it, it persists that data on its own node. But how will mongo-2, which is on Node 2, know about this user? To solve this, we use a Kubernetes Job that initializes a MongoDB replica set: one pod becomes the primary and the others become secondaries. All writes go to the primary, reads can go to either the primary or the secondaries, and replication keeps the data consistent across pods.
kubectl apply -f mongo-init-job.yml
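The exact contents of mongo-init-job.yml live in the repo; conceptually, the Job just runs rs.initiate() once against the first pod so the three members form a replica set named rs0 (matching the replicaSet=rs0 in the connection string we create later). A rough sketch of that idea, assuming the headless service is named mongo in the default namespace:
apiVersion: batch/v1
kind: Job
metadata:
  name: mongo-init
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: init-replica-set
          image: mongo:4.4                 # ships the legacy mongo shell
          command:
            - mongo
            - --host
            - mongo-0.mongo.default.svc.cluster.local:27017
            - --eval
            - |
              rs.initiate({
                _id: "rs0",
                members: [
                  { _id: 0, host: "mongo-0.mongo.default.svc.cluster.local:27017" },
                  { _id: 1, host: "mongo-1.mongo.default.svc.cluster.local:27017" },
                  { _id: 2, host: "mongo-2.mongo.default.svc.cluster.local:27017" }
                ]
              })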
So now we are done with deploying the DB.
We also have to seed the DB with some data, so run the migration job:
kubectl apply -f migration-job.yaml
Deploying the backend
Next, we'll deploy the backend service. This service requires sensitive environment variables such as database credentials and API keys. Instead of hardcoding these values or storing plain Kubernetes Secrets in Git, which is insecure, we'll use Bitnami SealedSecrets to securely manage and store secrets in version control.
So what are SealedSecrets? SealedSecrets are an extension of Kubernetes Secrets developed by Bitnami. They allow you to encrypt secrets into a "sealed" format, which is safe to store in Git. These sealed secrets can only be decrypted by the SealedSecrets controller running in your cluster, ensuring that your sensitive data remains secure.
Installing the SealedSecrets controller
On your cluster, execute the below:
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.29.0/controller.yaml
Install the kubeseal CLI
For Linux x86_64:
curl -OL "https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.29.0/kubeseal-0.29.0-linux-amd64.tar.gz"
tar -xvzf kubeseal-0.29.0-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
You may also follow the documentation: https://github.com/bitnami-labs/sealed-secrets/releases
Create the below secrets
mongo-uri-secret
kubectl create secret generic mongo-uri-secret \
--from-literal=MONGODB_URI="mongodb://mongo-0.mongo.default.svc.cluster.local:27017,mongo-1.mongo.default.svc.cluster.local:27017,mongo-2.mongo.default.svc.cluster.local:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred" \
--dry-run=client -o yaml > mongo-secret.yaml
Our backend uses Nodemailer to send registration emails, so we will need a Google App Password for it. If you don't know how to generate one, just look it up on YouTube.
kubectl create secret generic email-secret \
--from-literal=EMAIL_ID="youremailid" \
--from-literal=EMAIL_PASSWORD="yourapppassword" \
--from-literal=SECRET_KEY="JWT_SECRET" \
--dry-run=client -o yaml > email-secret.yaml
Now we will encrypt the secrets so that we can store them in GitHub.
kubeseal --controller-name=sealed-secrets-controller -o yaml < mongo-secret.yaml > mongo-sealed-secret.yaml
kubeseal --controller-name=sealed-secrets-controller -o yaml < email-secret.yaml > email-sealed-secret.yaml
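The sealed secrets still have to be applied to the cluster, where the controller decrypts them back into regular Secrets for the pods to consume (file names as generated above):
kubectl apply -f mongo-sealed-secret.yaml -f email-sealed-secret.yaml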
Now we are ready to deploy the backend. We will be using Kustomize in a future blog, so if you want the frontend and backend deployments to work, just change the image version to v1.
kubectl apply -f backend.yml
Deploying the frontend
kubectl apply -f frontend.yaml
Now that the frontend, backend, and DB are deployed, we will expose them using an Ingress. Since I plan to expose the app online, we will also use MetalLB. MetalLB is a network load-balancer implementation for bare-metal Kubernetes clusters. Unlike managed cloud Kubernetes (EKS, GKE, or AKS), where LoadBalancer services are backed by the cloud provider's infrastructure, bare-metal setups don't have native support for LoadBalancer services. That's where MetalLB comes in: it bridges the gap. In Kubernetes, when you create a Service of type LoadBalancer, the cloud typically assigns it a public IP. But in bare-metal environments (home labs, on-prem servers, or VMs), there's no cloud provider to provision a load balancer. MetalLB steps in to:
Assign an external IP to your Kubernetes services.
Handle traffic distribution to pods running across nodes.
Add the Helm repo:
helm repo add metallb https://metallb.github.io/metallb
helm repo update
Create a namespace for the deployment:
kubectl create namespace metallb-system
kubectl config set-context --current --namespace metallb-system
Deploy the chart
helm install metallb metallb/metallb
At this point there will be one Deployment and one DaemonSet created. The pods of the DaemonSet will not be running until we complete the configuration.
MetalLB Config
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.240-192.168.56.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2
  namespace: metallb-system
These IPs must be on the same subnet as your local network.
Make sure they are not being used by your router's DHCP pool.
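Save the two resources above to a file (for example metallb-config.yaml, a name chosen here just for illustration) and apply it; the speaker DaemonSet pods should come up once the configuration exists:
kubectl apply -f metallb-config.yaml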
Install the nginx-ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.3/deploy/static/provider/baremetal/deploy.yaml
kubectl get svc -n ingress-nginx
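Note that the bare-metal ingress-nginx manifest usually creates the controller Service as a NodePort. If that is what you see, you can switch it to LoadBalancer so MetalLB hands it an address from the pool; a quick sketch, not part of the original repo:
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'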
If the service is of type LoadBalancer, it will expose an external IP. We will now take this external IP and combine it with a tunneling service like ngrok.
curl -sSL https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
| sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
&& echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
| sudo tee /etc/apt/sources.list.d/ngrok.list \
&& sudo apt update \
&& sudo apt install ngrok
Create an account with ngrok and get the token
ngrok config add-authtoken <your-ngrok-token>
root@Ubuntu:/home/sky/CAAM-K8s# kubectl get ingress
EXPECTED OUTPUT
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress nginx 1d83-2405-201-4f-a9e3-a00-27ff-fe74-14e0.ngrok-free.app 192.168.29.240 80 36
However, the HOSTS field will not be set yet at this stage. Take note of the external IP address used by the ingress:
ngrok http 192.168.29.240 > /tmp/ngrok.log 2>&1 &
sleep 5 # Wait for ngrok to initialize
curl -s http://localhost:4040/api/tunnels | jq -r '.tunnels[0].public_url'
Now set this as the host in ingress.yaml, as shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  ingressClassName: nginx # Specify the Ingress class
  rules:
    - host: b5bc-2405-201-4f-a9e3-a00-27ff-fe74-14e0.ngrok-free.app
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend # Your backend service name
                port:
                  number: 5000 # Port of the backend service
          - path: /github-webhook
            pathType: Prefix
            backend:
              service:
                name: jenkins-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend # Your frontend service name
                port:
                  number: 80
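Apply the manifest (assuming it is saved as ingress.yaml):
kubectl apply -f ingress.yaml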
And woohoo, you have done it! The application is now live and accessible over the Internet. In future blogs I will be creating a Jenkins DevSecOps pipeline, and we will be integrating Nexus, Argo CD, SAST, and DAST tools into the pipeline, so stay tuned!