From Local Docker to AWS EKS: Building a Production-Grade MongoDB Architecture for FeedbackHub

Ever pushed your local app to production and watched it fail to connect to the database? I have — and here’s how I fixed it.
Introduction
Building applications that work seamlessly across different environments (local development vs. production) is a common challenge in modern software development. In this article, I'll share our journey of transforming FeedbackHub from a simple local application to a production-grade system that elegantly handles both local Docker MongoDB and AWS MongoDB Atlas with proper environment detection.
The Challenge
We were building FeedbackHub, a Next.js application that needed to:
- Work locally with Docker and MongoDB for development
- Deploy to AWS EKS with MongoDB Atlas for production
- Handle environment-specific configurations automatically
- Maintain security best practices (no hardcoded secrets)
Initial Architecture (The Problem)
What We Started With
```ts
// app/lib/mongodb.ts - The problematic approach
import { MongoClient } from 'mongodb'

const mongoUri = process.env.MONGODB_URI || 'mongodb://localhost:27017/feedbackhub'
const client = new MongoClient(mongoUri)

export async function getDb() {
  await client.connect()
  return client.db()
}
```
This hardcoded fallback URI caused multiple issues in both Docker and production.
:::warning Why This Failed
- Hardcoded localhost fallbacks don't work inside Docker.
- No environment detection.
- Secrets in code or env files.
- Build-time connection attempts.
:::
The Breaking Point
Local Development Failures
```
❌ MongoDB connection failed: MongoServerSelectionError:
getaddrinfo ENOTFOUND localhost
```
Root Cause: Inside a Docker container, localhost refers to the container itself, not the host machine where MongoDB is running.
Production Deployment Issues
```
❌ Module not found: Can't resolve '@aws-sdk/client-secrets-manager'
```
Root Cause: A static import of the AWS SDK forced the bundler to resolve it in every build, even local ones where the package wasn't installed. The fix is the dynamic import shown in Step 1 below.
The Solution: Environment-Aware Architecture
Step 1: Modular MongoDB Implementation
```ts
// app/lib/mongodb.ts - Smart Router
// IRSA injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into EKS pods,
// so their presence (plus NODE_ENV) is a reliable runtime signal
const isRunningOnAWS = process.env.AWS_ROLE_ARN &&
  process.env.AWS_WEB_IDENTITY_TOKEN_FILE &&
  process.env.NODE_ENV === 'production'

async function getRuntimeDb() {
  if (isRunningOnAWS) {
    // Dynamic import: the AWS SDK is only loaded when actually running on AWS,
    // so local builds never try to bundle it
    const awsModule = await import('./mongodb.aws')
    return awsModule.getDb()
  } else {
    const localModule = await import('./mongodb.local')
    return localModule.getDb()
  }
}
```
This router dynamically chooses between AWS and local database modules.
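From the rest of the app's point of view, nothing changes. Here is a hypothetical route handler, assuming the router re-exports `getDb` (as the original module did) and a `feedback` collection exists:

```ts
// app/api/feedback/route.ts - hypothetical consumer of the smart router
import { getDb } from '@/app/lib/mongodb'

export async function GET() {
  const db = await getDb()
  // The handler never knows whether this hit local Docker or Atlas
  const feedback = await db.collection('feedback').find().limit(20).toArray()
  return Response.json(feedback)
}
```

The two backend modules each resolve their own connection URI: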
```ts
// app/lib/mongodb.local.ts
const mongoUri = process.env.MONGODB_URI || 'mongodb://host.docker.internal:27017/feedbackhub'
```
```ts
// app/lib/mongodb.aws.ts
const awsSecretsModule = await import('./aws-secrets')
const mongoUri = await awsSecretsModule.getMongoDBUri()
```
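In practice the local module needs a bit more than one line. A minimal sketch, assuming the usual Next.js pattern of caching the client so hot reloads don't open new connections:

```ts
// app/lib/mongodb.local.ts - sketch with a cached connection (assumed structure)
import { MongoClient, Db } from 'mongodb'

// host.docker.internal resolves to the host machine from inside the container
const mongoUri = process.env.MONGODB_URI || 'mongodb://host.docker.internal:27017/feedbackhub'

let cachedDb: Db | null = null

export async function getDb(): Promise<Db> {
  if (cachedDb) return cachedDb
  const client = new MongoClient(mongoUri)
  await client.connect()
  cachedDb = client.db()
  return cachedDb
}
```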
Step 2: Environment Detection Strategy
```ts
// Detect the Next.js production build phase (as opposed to runtime on AWS)
const isBuildTime = typeof window === 'undefined' &&
  process.env.NODE_ENV === 'production' &&
  process.env.NEXT_PHASE === 'phase-production-build' &&
  !process.env.AWS_ROLE_ARN
```
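The flag only matters if the database layer actually checks it. A sketch of how the exported entry point can use it, assuming the `isBuildTime` and `getRuntimeDb` definitions above live in the same module:

```ts
// app/lib/mongodb.ts - build-time guard on the exported entry point (sketch)
export async function getDb() {
  if (isBuildTime) {
    // Fail fast with a clear error instead of hanging on a connection
    // attempt while `next build` pre-renders pages
    throw new Error('Database access attempted during build - render this page dynamically')
  }
  return getRuntimeDb()
}
```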
Step 3: Docker Configuration
```yaml
# docker/docker-compose.dev.yml (excerpt)
services:
  app:
    environment:
      - NODE_ENV=development
      - MONGODB_URI=mongodb://host.docker.internal:27017/feedbackhub
    volumes:
      - ../app:/app/app:delegated
    # On Linux, host.docker.internal may need an explicit mapping:
    # extra_hosts:
    #   - "host.docker.internal:host-gateway"
```
Step 4: AWS Secrets Manager Integration
```ts
// app/lib/aws-secrets.ts
export async function getMongoDBUri(): Promise<string> {
  const secretName = process.env.FEEDBACKHUB_SECRET_NAME
  if (!secretName) {
    throw new Error('FEEDBACKHUB_SECRET_NAME is not set')
  }
  const secretString = await getSecret(secretName)
  const secretData = JSON.parse(secretString)
  return secretData.MONGODB_URI
}
```
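The `getSecret` helper isn't shown above; a minimal sketch using the AWS SDK v3 Secrets Manager client might look like the following. With IRSA, the SDK picks up credentials from the pod's web identity token automatically, so no keys are configured anywhere:

```ts
// app/lib/aws-secrets.ts - sketch of the getSecret helper (assumed implementation)
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager'

// Region and credentials are resolved from the pod environment (IRSA)
const client = new SecretsManagerClient({})

async function getSecret(secretId: string): Promise<string> {
  const response = await client.send(new GetSecretValueCommand({ SecretId: secretId }))
  if (!response.SecretString) {
    throw new Error(`Secret ${secretId} has no string value`)
  }
  return response.SecretString
}
```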
:::tip Security Best Practice
Never hardcode secrets. Use AWS Secrets Manager with IRSA for production environments.
:::
The Build Process: Environment-Agnostic Images
```bash
# Build without secrets baked into the image
docker build -f docker/Dockerfile.prod -t feedbackhub:latest .

# Push to ECR
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/feedbackhub:latest

# Deploy to EKS
kubectl apply -f k8s/deployment.yaml
```
Runtime vs Build Time
```ts
// ❌ WRONG: credentials baked into the image at build time
const mongoUri = 'mongodb+srv://user:pass@cluster.mongodb.net/db'

// ✅ CORRECT: secret resolved at runtime from Secrets Manager
const mongoUri = await getMongoDBUri()
```
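Since `getMongoDBUri()` makes a network call to Secrets Manager, it's worth caching the result at module level. A sketch, assuming the `getMongoDBUri` from Step 4; the cache lives for the life of the process:

```ts
// app/lib/aws-secrets.ts - cache the URI so Secrets Manager is hit once (sketch)
let cachedUri: string | null = null

export async function getCachedMongoDBUri(): Promise<string> {
  if (!cachedUri) {
    cachedUri = await getMongoDBUri()
  }
  return cachedUri
}
```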
Architecture Evolution
Before:
```
┌─────────────────┐
│   Next.js App   │
│  Single MongoDB │
│  Hardcoded URI  │
└─────────────────┘
```
After:
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Next.js App   │    │   Environment   │    │     MongoDB     │
│  Smart Router   │───▶│    Detection    │───▶│    Strategy     │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
Deployment Process
Local:
```bash
./scripts/docker-dev.sh start
```
AWS:
```bash
./scripts/build-and-push-ecr.sh
cd terraform/k8s/feedbackhub_app
terraform apply
kubectl -n feedbackhub port-forward svc/feedbackhub-svc 8080:3000
```
Troubleshooting Journey
Dependency Issue:
```bash
docker-compose -f docker/docker-compose.dev.yml down -v
docker-compose -f docker/docker-compose.dev.yml build --no-cache
docker-compose -f docker/docker-compose.dev.yml up -d
```
Port Forwarding:
```bash
ps aux | grep "kubectl port-forward"
kubectl -n feedbackhub port-forward svc/feedbackhub-svc 8080:3000
```
Lessons Learned
- Use multiple indicators for environment detection.
- Understand Docker networking (`localhost` vs `host.docker.internal`).
- Prevent runtime DB connections during build.
- Never hardcode secrets.
- Rebuild images with `--no-cache` when troubleshooting stale dependencies.
Current Status
✅ Local dev works
✅ AWS prod works
✅ Automatic environment detection
✅ IRSA secret access
✅ Scalable and production-ready
Conclusion
This architecture enables FeedbackHub to transition seamlessly between environments while staying secure, portable, and maintainable.
What about you? How do you handle multi-environment database configurations in your projects? Share in the comments!