From Code to Kubernetes: A Real-World Journey of Deploying a Node.js App on Minikube

Table of contents
- Introduction
- The Application: Simple Node.js Todo App
- Phase 1: Containerizing the Application
- Phase 2: Kubernetes Configuration
- Phase 3: The Deployment Journey
- Challenge 1: Namespace Confusion
- Challenge 2: The Mysterious Internal Server Error
- Challenge 3: Health Check Failures
- Challenge 4: Image Updates and Rollouts
- The Final Working Solution
- Key Lessons Learned
- Best Practices Discovered
- Tools and Commands That Saved the Day
- Conclusion
- Resources and References

A detailed account of deploying a simple Node.js application to Kubernetes, including all the challenges, mistakes, and lessons learned along the way.
Introduction
Recently, I embarked on a journey to deploy a simple Node.js todo application to Kubernetes using minikube. What seemed like a straightforward task turned into an educational adventure filled with debugging sessions, configuration tweaks, and valuable lessons about containerization and orchestration. This post chronicles the entire process, including the challenges faced and how they were overcome.
The Application: Simple Node.js Todo App
The application I chose to deploy was a simple Node.js todo app with basic CRUD operations and a web interface.
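The app's source isn't the focus of this post, but for context, here is a minimal sketch of what such a todo app might look like. It assumes Express, in-memory storage, and a public directory for the web interface; none of these details are confirmed by the original code.
const express = require('express');
const path = require('path');

const app = express();
app.use(express.json());
// Serve the web interface from /app/public (the directory that later goes missing)
app.use(express.static(path.join(__dirname, 'public')));

// In-memory todo store (assumption: the real app may persist differently)
let todos = [];
let nextId = 1;

app.get('/api/todos', (req, res) => res.json(todos));

app.post('/api/todos', (req, res) => {
  const todo = { id: nextId++, title: req.body.title, completed: false };
  todos.push(todo);
  res.status(201).json(todo);
});

app.delete('/api/todos/:id', (req, res) => {
  todos = todos.filter((t) => t.id !== Number(req.params.id));
  res.status(204).end();
});

app.listen(3000, () => console.log('Listening on port 3000'));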
Phase 1: Containerizing the Application
The Docker Setup
First, I created a production-ready Dockerfile:
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodeuser -u 1001

COPY --chown=nodeuser:nodejs . .

RUN mkdir -p /app/logs && \
    chown -R nodeuser:nodejs /app

USER nodeuser

EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (res) => { process.exit(res.statusCode === 200 ? 0 : 1) })"

CMD ["npm", "start"]
Key decisions made:
- Used Alpine Linux for smaller image size
- Implemented security best practices with non-root user
- Added health checks for container monitoring
- Optimized layer caching with separate package.json copy
Building and Testing Locally
# Build the image
docker build -t akpadetsi/simple-node-app:latest .
# Test locally
docker run -p 3000:3000 akpadetsi/simple-node-app:latest
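Before moving to Kubernetes, a quick request against the running container confirms it serves traffic. A hypothetical smoke test (the endpoints come from later in this post):
# From another terminal, hit the running container
curl http://localhost:3000/
curl http://localhost:3000/api/todos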
The local Docker deployment worked flawlessly, giving me confidence to proceed to Kubernetes.
Phase 2: Kubernetes Configuration
The Kubernetes Manifests
I created a comprehensive set of Kubernetes manifests:
- Namespace - Isolated environment
- ConfigMap - Environment variables
- Deployment - Application pods
- Services - Both ClusterIP and NodePort
- Ingress - External access
- HPA - Horizontal Pod Autoscaler
Here's the deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-node-app
  namespace: simple-node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-node-app
  template:
    metadata:
      labels:
        app: simple-node-app   # required: must match spec.selector.matchLabels
    spec:
      containers:
      - name: simple-node-app
        image: akpadetsi/simple-node-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          valueFrom:
            configMapKeyRef:
              name: simple-node-app-config
              key: NODE_ENV
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
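The ConfigMap and the two Services referenced here never appear in full in this post, so here is a reconstruction that matches the names and ports shown in the service listing later on; the NODE_ENV value is an assumption:
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-node-app-config
  namespace: simple-node-app
data:
  NODE_ENV: "production"   # assumption: actual value not shown in the post
---
apiVersion: v1
kind: Service
metadata:
  name: simple-node-app-service
  namespace: simple-node-app
spec:
  type: ClusterIP
  selector:
    app: simple-node-app
  ports:
  - port: 80          # cluster-facing port
    targetPort: 3000  # the containerPort from the deployment
---
apiVersion: v1
kind: Service
metadata:
  name: simple-node-app-nodeport
  namespace: simple-node-app
spec:
  type: NodePort
  selector:
    app: simple-node-app
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30080   # matches the 80:30080/TCP mapping shown later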
Phase 3: The Deployment Journey
Setting Up Minikube
# Start minikube with adequate resources
minikube start --driver=docker --memory=4096 --cpus=2
# Enable required addons
minikube addons enable ingress
minikube addons enable metrics-server
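The manifest set also includes an Ingress, which isn't shown elsewhere in this walkthrough. A plausible sketch for minikube's NGINX ingress addon, where the resource name and host are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-node-app-ingress
  namespace: simple-node-app
spec:
  ingressClassName: nginx
  rules:
  - host: simple-node-app.local   # assumption: map this host to `minikube ip` in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: simple-node-app-service
            port:
              number: 80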
Initial Deployment
# Push image to Docker Hub
docker push akpadetsi/simple-node-app:latest
# Deploy to Kubernetes
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
Everything seemed to deploy successfully:
$ kubectl get pods -n simple-node-app
NAME                               READY   STATUS    RESTARTS   AGE
simple-node-app-5f56f947d5-5zvzn   1/1     Running   0          97s
simple-node-app-5f56f947d5-gxcww   1/1     Running   0          97s
simple-node-app-5f56f947d5-rxk7t   1/1     Running   0          97s
Challenge 1: Namespace Confusion
The Problem
When I tried to access the service, I got this error:
$ minikube service simple-node-app-nodeport
❌ Exiting due to SVC_NOT_FOUND: Service 'simple-node-app-nodeport' was not found in 'default' namespace.
The Solution
The issue was that I forgot to specify the namespace. My resources were in the simple-node-app namespace, not the default one.
# Wrong
minikube service simple-node-app-nodeport
# Correct
minikube service simple-node-app-nodeport -n simple-node-app
Lesson Learned: Always specify the namespace when working with custom namespaces. Consider setting the default namespace context:
kubectl config set-context --current --namespace=simple-node-app
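To confirm which namespace the current context points at, a quick sanity check (not part of the original session):
# Prints the namespace of the active context, if one is set
kubectl config view --minify --output 'jsonpath={..namespace}'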
Challenge 2: The Mysterious Internal Server Error
The Problem
After successfully accessing the service, I was greeted with:
{"success":false,"message":"Something went wrong!","error":"Internal server error"}
The pods showed as healthy, but the application was clearly not working correctly.
The Investigation
I started debugging by checking the application logs:
kubectl logs -l app=simple-node-app -n simple-node-app -f
The logs revealed the real issue:
Error: ENOENT: no such file or directory, stat '/app/public/index.html'
Error: ENOENT: no such file or directory, stat '/app/public/index.html'
Error: ENOENT: no such file or directory, stat '/app/public/index.html'
The Root Cause
The problem was in my .dockerignore file:
# Gatsby files
.cache/
public # <-- This was excluding my entire public directory!
My Node.js application was trying to serve static files from the public directory, but this directory was being excluded during the Docker build process.
The Solution
I had two options:
Option 1: Fix the .dockerignore (Recommended)
# Gatsby files
.cache/
# public # Commented out - we need this directory for our static files
Option 2: Add fallback handling in server.js
// Catch-all route for SPA
app.get('*', (req, res) => {
  const indexPath = path.join(__dirname, 'public', 'index.html');
  if (require('fs').existsSync(indexPath)) {
    res.sendFile(indexPath);
  } else {
    // Fallback response if static files don't exist
    res.json({
      success: true,
      message: 'Simple Node.js App is running!',
      endpoints: {
        health: '/health',
        info: '/api/info',
        todos: '/api/todos'
      }
    });
  }
});
I implemented both solutions for robustness.
Lesson Learned: Always review your .dockerignore file carefully. What you exclude can break your application in unexpected ways.
Challenge 3: Health Check Failures
The Problem
Initially, my health checks were failing because the application didn't have a /health endpoint, but my Kubernetes deployment was configured to check it:
livenessProbe:
  httpGet:
    path: /health   # This endpoint didn't exist!
    port: 3000
The Solution
I had two options:
1. Add the /health endpoint to my application, which I had already done (a minimal sketch follows below)
2. Change the health check to use an existing endpoint like /
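For reference, a health endpoint doesn't need to be elaborate. A minimal sketch of what the added handler could look like, assuming Express (the response shape is illustrative):
// Lightweight health endpoint for Kubernetes liveness/readiness probes
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'ok',
    uptime: process.uptime(),            // seconds since the process started
    timestamp: new Date().toISOString()
  });
});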
I temporarily removed the health checks to get the application working, then added them back once the main issue was resolved:
# Temporarily remove health checks
kubectl edit deployment simple-node-app -n simple-node-app
Lesson Learned: Ensure your health check endpoints exist and return appropriate status codes before configuring Kubernetes probes.
The /health endpoint responding correctly
Challenge 4: Image Updates and Rollouts
The Problem
After fixing the code and rebuilding the Docker image, the pods were still running the old version.
The Solution
I learned about Kubernetes rollout management:
# Rebuild and push the image
docker build -t akpadetsi/simple-node-app:latest .
docker push akpadetsi/simple-node-app:latest
# Force a rolling restart to pull the new image
kubectl rollout restart deployment simple-node-app -n simple-node-app
# Monitor the rollout progress
kubectl rollout status deployment simple-node-app -n simple-node-app
Lesson Learned: With imagePullPolicy: Always, you can force pod restarts to pull updated images without changing the deployment configuration.
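For anyone who would rather avoid :latest altogether, an alternative is a versioned tag plus kubectl set image; the tag below is hypothetical:
# Build and push an explicitly versioned image (v1.1.0 is a hypothetical tag)
docker build -t akpadetsi/simple-node-app:v1.1.0 .
docker push akpadetsi/simple-node-app:v1.1.0

# Point the deployment at the new tag; this triggers a rolling update
kubectl set image deployment/simple-node-app \
  simple-node-app=akpadetsi/simple-node-app:v1.1.0 -n simple-node-app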
The /api/info endpoint showing application details
The /api/todos endpoint returning the todo list
The Final Working Solution
After overcoming all challenges, here's what the final working setup looked like:
Successful Deployment
$ kubectl get all -n simple-node-app
NAME                                   READY   STATUS    RESTARTS   AGE
pod/simple-node-app-5f56f947d5-5zvzn   1/1     Running   0          14m
pod/simple-node-app-5f56f947d5-gxcww   1/1     Running   0          14m
pod/simple-node-app-5f56f947d5-rxk7t   1/1     Running   0          14m

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/simple-node-app-nodeport   NodePort    10.104.129.172   <none>        80:30080/TCP   12m
service/simple-node-app-service    ClusterIP   10.99.7.157      <none>        80/TCP         12m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/simple-node-app   3/3     3            3           14m
Accessing the Application
$ minikube service simple-node-app-nodeport -n simple-node-app
|-----------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------------|--------------------------|-------------|---------------------------|
| simple-node-app | simple-node-app-nodeport | http/80 | http://192.168.49.2:30080 |
|-----------------|--------------------------|-------------|---------------------------|
🎉 Opening service simple-node-app/simple-node-app-nodeport in default browser...
The application was now fully functional with:
✅ Working frontend interface
✅ Functional API endpoints (/health, /api/info, /api/todos)
✅ Proper error handling
✅ Zero-downtime deployments
The application homepage showing it's running successfully
The full application interface with todo functionality
Key Lessons Learned
1. Container vs. Kubernetes Issues
Not all deployment problems are Kubernetes-related. In my case, the main issue was with the Docker image configuration (.dockerignore), not Kubernetes itself.
2. Debugging Strategy
Follow a systematic debugging approach:
1. Check pod status: kubectl get pods
2. Examine logs: kubectl logs
3. Test connectivity: kubectl exec and test endpoints
4. Verify configuration: kubectl describe
3. Namespace Management
Always be explicit about namespaces:
- Use the -n namespace flag consistently
- Consider setting the default namespace context
- Remember that reaching a service in another namespace requires its fully qualified DNS name (service.namespace.svc.cluster.local); see the example after this list
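For example, a throwaway pod in any namespace can reach the service through its cluster DNS name (an illustrative check, not from the original session):
# busybox wget against the fully qualified service name
kubectl run dns-test --image=busybox -it --rm --restart=Never -- \
  wget -qO- http://simple-node-app-service.simple-node-app.svc.cluster.local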
4. Image Management
- Use imagePullPolicy: Always for development
- Tag images properly for production
- Use kubectl rollout restart for quick updates
- Monitor rollout status to ensure successful deployments
5. Health Checks Matter
- Implement proper health check endpoints in your application
- Start with basic health checks, then add more sophisticated ones (a tuning sketch follows below)
- Health checks help Kubernetes make better decisions about pod lifecycle
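Once the endpoint is stable, probe timing can be made explicit instead of relying on defaults. A sketch with tunable values (the numbers are assumptions to adjust per app):
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10   # give the app time to boot before the first check
  periodSeconds: 15
  failureThreshold: 3       # restart only after three consecutive failures
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10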
Best Practices Discovered
Docker Best Practices
# Use specific base image versions
FROM node:18.17.0-alpine
# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && adduser -S nodeuser -u 1001
# Optimize layer caching
COPY package*.json ./
RUN npm ci --only=production
# Set proper ownership
COPY --chown=nodeuser:nodejs . .
USER nodeuser
Kubernetes Best Practices
# Always set resource limits
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

# Use proper security context
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
  allowPrivilegeEscalation: false
Operational Best Practices
# Use labels for better organization
kubectl get pods -l app=simple-node-app
# Monitor deployments
kubectl rollout status deployment/simple-node-app
# Keep rollout history
kubectl rollout history deployment/simple-node-app
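Keeping the rollout history pays off when a release goes bad; rolling back is one command (illustrative):
# Roll back to the previous revision
kubectl rollout undo deployment/simple-node-app

# Or target a specific revision from the history
kubectl rollout undo deployment/simple-node-app --to-revision=2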
Tools and Commands That Saved the Day
Essential Debugging Commands
# Check everything in a namespace
kubectl get all -n simple-node-app
# Follow logs in real-time
kubectl logs -f deployment/simple-node-app -n simple-node-app
# Get a shell inside a pod
kubectl exec -it pod-name -n simple-node-app -- /bin/sh
# Test service connectivity
kubectl run test-pod --image=busybox -it --rm -- sh
# Port forward for local testing
kubectl port-forward svc/simple-node-app-service 3000:80 -n simple-node-app
Useful Minikube Commands
# Get minikube IP
minikube ip
# Access services easily
minikube service simple-node-app-nodeport -n simple-node-app
# Load local images
minikube image load simple-node-app:latest
# View dashboard
minikube dashboard
Conclusion
Deploying a simple Node.js application to Kubernetes taught me that "simple" doesn't always mean "easy." The journey involved:
- Docker configuration challenges - .dockerignore excluding necessary files
- Kubernetes networking concepts - Understanding namespaces and services
- Debugging methodologies - Systematic approach to troubleshooting
- Deployment strategies - Rolling updates and image management
The most valuable lesson was that many "Kubernetes problems" are actually application or container configuration issues. By methodically debugging each layer (application → container → Kubernetes), I was able to identify and resolve each challenge.
This experience reinforced the importance of:
- Understanding your application dependencies (static files, endpoints, etc.)
- Proper container configuration (what to include/exclude)
- Kubernetes fundamentals (namespaces, services, deployments)
- Systematic debugging (logs, connectivity tests, configuration verification)
The final result was a robust, scalable deployment running on Kubernetes with proper health checks, resource management, and zero-downtime update capabilities.
For anyone embarking on a similar journey, remember: every error is a learning opportunity, and the debugging process often teaches you more than when everything works perfectly the first time.
Resources and References
This blog post documents a real deployment experience, including all the mistakes and learning moments. The goal is to help others avoid similar pitfalls and understand the debugging process when things don't go as planned.