Complete Guide to Building Microservices with Node.js, TypeScript & Docker: From Monolith to Production

Table of contents
- Understanding Microservices: Why Break Apart Your Monolith?
- Project Structure: Monorepo vs Polyrepo Decision
- Monorepo Approach (What We'll Use)
- Understanding the Shared Package
- Inter-Service Communication: The Heart of Microservices
- 1. Synchronous Communication (HTTP/REST)
- 2. Asynchronous Communication (Message Queues)
- Implementing the Message Queue System
- Publisher (Auth Service)
- Subscriber (User Service)
- API Gateway: The Front Door Pattern
- 1. Cross-Cutting Concerns
- 2. Service Discovery
- 3. Load Balancing (Advanced)
- Docker: Containerization Strategy
- Understanding the Dockerfile
- Docker Compose: Orchestration Magic
- Error Handling: Building Resilient Systems
- Circuit Breaker Pattern
- Graceful Degradation
- Configuration Management: Environment-Aware Services
- Monitoring and Observability: Seeing Inside Your System
- Request Tracing
- Health Checks and Metrics
- Testing Strategies for Microservices
- 1. Unit Tests (Service Level)
- 2. Integration Tests (Service Communication)
- 3. Contract Tests (API Compatibility)
- Deployment and Scaling Strategies
- Development Workflow
- Production Considerations
- Common Pitfalls and How to Avoid Them
- 1. The Distributed Monolith Anti-Pattern
- 2. Chatty Interface Anti-Pattern
- 3. Data Consistency Challenges
- Performance Optimization Strategies
- 1. Connection Pooling
- 2. Caching Strategy
- 3. Database Per Service Pattern
- Security in Microservices
- 1. Service-to-Service Authentication
- 2. API Gateway Security
- Testing Your Implementation
- 1. Start the Services
- 2. Test User Registration Flow
- 3. Test Authentication Flow
- 4. Test Service Communication
- Production Readiness Checklist
- Infrastructure
- Security
- Reliability
- Observability
- Conclusion: The Microservices Journey
Microservices architecture has become the gold standard for building scalable, maintainable applications. If you're coming from a monolithic background, the transition can feel overwhelming. This guide breaks down not just the "how" but the "why" behind every architectural decision, helping you truly understand microservices rather than just copying code.
Understanding Microservices: Why Break Apart Your Monolith?
Before diving into implementation, let's understand what problems microservices solve:
The Monolith Problem:
Issues with Monoliths:
Single Point of Failure: If auth breaks, everything breaks
Technology Lock-in: Entire app must use same tech stack
Scaling Inefficiency: Must scale entire app even if only auth needs more resources
Development Bottlenecks: Teams step on each other's toes
The Microservices Solution:
Benefits:
Independent Deployment: Update auth without touching user service
Technology Diversity: Use Python for ML, Node.js for APIs, Go for performance-critical services
Granular Scaling: Scale only what needs scaling
Team Autonomy: Each team owns their service end-to-end
Project Structure: Monorepo vs Polyrepo Decision
The first architectural decision is how to organize your code. Let's understand both approaches:
Monorepo Approach (What We'll Use)
microservices-platform/
├── packages/
│ ├── shared/ # Common code across services
│ ├── gateway/ # Entry point for all requests
│ ├── auth-service/ # Handles authentication
│ └── user-service/ # Manages user data
├── docker-compose.yml # Orchestrates all services
└── package.json # Workspace configuration
Why Monorepo?
Shared Code Management: Common types, utilities in one place
Atomic Commits: Change interface? Update all consumers in one commit
Simplified CI/CD: One pipeline can test service interactions
Developer Experience: Single `git clone`, consistent tooling
When to Choose Polyrepo:
Large teams (50+ developers)
Different release cycles per service
Strong service ownership boundaries
Understanding the Shared Package
The `shared` package is crucial for type safety across services:
```typescript
// packages/shared/src/types/index.ts
export interface User {
  id: string;
  email: string;
  username: string;
  createdAt: Date;
  updatedAt: Date;
}

// This interface ensures ALL services speak the same language
export interface ServiceResponse<T = any> {
  success: boolean;  // Consistent response format
  data?: T;          // Type-safe payload
  error?: string;    // Standardized error handling
  message?: string;  // Human-readable messages
}
```
Why This Matters:
Without shared types, Service A might return `{ status: "ok", payload: {...} }` while Service B returns `{ success: true, data: {...} }`. Clients would need different handling logic for each service: a maintenance nightmare.
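To keep every service on this envelope, the shared package can also export a small pair of constructors. This is a sketch, and the `ok`/`fail` helper names are assumptions, not part of the article's actual shared package:

```typescript
// packages/shared/src/response.ts (sketch; ok/fail are assumed names)
export interface ServiceResponse<T = any> {
  success: boolean;
  data?: T;
  error?: string;
  message?: string;
}

// Build a successful envelope around a typed payload
export function ok<T>(data: T, message?: string): ServiceResponse<T> {
  return { success: true, data, message };
}

// Build a failure envelope with a standardized error string
export function fail(error: string): ServiceResponse<never> {
  return { success: false, error };
}
```

A service then returns `res.json(ok(user))` or `res.status(401).json(fail('Invalid token'))`, and no handler can accidentally invent its own response shape.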
Inter-Service Communication: The Heart of Microservices
Microservices communicate in two fundamental ways. Understanding when to use each is critical:
1. Synchronous Communication (HTTP/REST)
When to Use:
Real-time operations requiring immediate response
Data queries that client needs right now
Operations that must complete before proceeding
Example: Authentication Check
```typescript
// user-service needs to verify a token with auth-service
const authMiddleware = (authClient: HttpClient) => {
  return async (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    try {
      const token = req.headers.authorization?.substring(7); // strip "Bearer "
      // SYNCHRONOUS call - we MUST know if user is authenticated
      // before allowing access to user data
      const response = await authClient.post<ServiceResponse<AuthPayload>>('/auth/verify', { token });
      if (!response.success) {
        return res.status(401).json({ success: false, error: 'Invalid token' });
      }
      req.user = response.data;
      next(); // Proceed only after successful authentication
    } catch (error) {
      // Handle network failures, timeouts, etc.
      res.status(401).json({ success: false, error: 'Authentication failed' });
    }
  };
};
```
The Problem with Synchronous Calls:
Client → Gateway → User Service → Auth Service
↓
[Auth Service Down]
↓
Entire Request Fails
Solutions We Implement:
Circuit Breaker Pattern (in HttpClient)
Timeout Handling (5-second timeout)
Retry Logic (with exponential backoff)
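The retry logic can be sketched as a small wrapper. This is a minimal sketch; the `withRetry` helper and its parameters are assumptions for illustration, not the article's actual `HttpClient` internals:

```typescript
// Retry an async operation with exponential backoff:
// waits 100ms, 200ms, 400ms, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // All attempts exhausted
}
```

In practice you would also cap the maximum delay and add jitter so that many clients don't retry in lockstep.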
2. Asynchronous Communication (Message Queues)
When to Use:
Event notifications that don't need immediate response
Operations that can happen "eventually"
Decoupling services for better resilience
Example: User Registration Flow
```typescript
// auth-service creates user account
app.post('/auth/register', async (req, res) => {
  // 1. Create user in auth database
  const user = await createUser(userData);

  // 2. Respond immediately to client
  res.status(201).json({ success: true, data: { token, user } });

  // 3. Notify other services asynchronously
  await publisher.publish(EventTypes.USER_CREATED, {
    userId: user.id,
    email: user.email,
    username: user.username
  });
  // ↑ This happens after the response, won't block user experience
});
```
Why This Works Better:
Client → Auth Service → Response (Fast!)
↓
[Background]
↓
Message Queue → User Service
→ Email Service
→ Analytics Service
If the user service is down, the event waits in the queue and gets processed when the service comes back up. (One caveat: this requires a broker with persistence, such as Redis Streams, RabbitMQ, or Kafka. The plain Redis pub/sub used below is fire-and-forget, so messages published while a subscriber is offline are lost.) The user gets their account immediately, and profile creation happens behind the scenes.
Implementing the Message Queue System
Let's break down the Redis pub/sub implementation:
Publisher (Auth Service)
```typescript
export class RedisPublisher {
  private redis: Redis;

  constructor() {
    this.redis = new Redis({
      host: process.env.REDIS_HOST || 'localhost',
      port: parseInt(process.env.REDIS_PORT || '6379'),
      retryDelayOnFailover: 100,  // Retry connection after 100ms
      maxRetriesPerRequest: 3,    // Give up after 3 attempts
    });
  }

  async publish(eventType: EventTypes, payload: any): Promise<void> {
    const message: QueueMessage = {
      type: eventType,        // What happened?
      payload,                // Event data
      timestamp: new Date(),  // When did it happen?
      correlationId: `${eventType}_${Date.now()}_${Math.random()}` // Trace this event
    };

    // Redis pub/sub: fire-and-forget messaging
    await this.redis.publish('microservices_events', JSON.stringify(message));
  }
}
```
Understanding Correlation ID:
When debugging distributed systems, you need to trace events across services. A correlation ID lets you follow a user registration from auth-service → user-service → email-service → analytics-service.
Subscriber (User Service)
```typescript
export class RedisSubscriber extends EventEmitter {
  async start(): Promise<void> {
    // Subscribe to the events channel
    await this.redis.subscribe('microservices_events');

    this.redis.on('message', (channel, message) => {
      const queueMessage: QueueMessage = JSON.parse(message);
      // Emit as Node.js event for local handling
      this.emit(queueMessage.type, queueMessage.payload);
    });
  }
}

// Usage in user service
subscriber.on('user.created', (payload) => {
  // Handle user creation event
  const profile: User = {
    id: payload.userId,
    email: payload.email,
    username: payload.username,
    createdAt: new Date(),
    updatedAt: new Date()
  };
  userProfiles.set(payload.userId, profile);
});
```
Event-Driven Architecture Benefits:
Loose Coupling: Auth service doesn't know about user service
Resilience: With a durable broker (e.g., Redis Streams or RabbitMQ), events wait for the consumer to return; note that plain Redis pub/sub drops messages while the subscriber is offline
Scalability: Add new services without changing existing ones
Auditability: Every event is logged with timestamp and correlation ID
API Gateway: The Front Door Pattern
The API Gateway acts as a reverse proxy, routing requests to appropriate services:
```typescript
// Instead of clients calling services directly:
//   Client → Auth Service (port 3001)
//   Client → User Service (port 3002)
//   Client → Order Service (port 3003)

// Gateway provides a single entry point:
//   Client → Gateway (port 3000) → Internal Services
```
Why Use an API Gateway?
1. Cross-Cutting Concerns
```typescript
// Applied to ALL services automatically
app.use(helmet());                          // Security headers
app.use(cors({ origin: allowedOrigins }));  // CORS policy
app.use(limiter);                           // Rate limiting
```
Without a gateway, you'd need to implement these in every service.
2. Service Discovery
```typescript
// Gateway knows where services live
app.use('/api/auth', createProxyMiddleware({
  target: 'http://auth-service:3001',  // Docker service name
  changeOrigin: true,
  pathRewrite: (path) => path.replace('/api/auth', '/auth'),
}));
```
Clients don't need to know internal service URLs or ports.
3. Load Balancing (Advanced)
```typescript
// Gateway can distribute load across multiple instances
const authServiceUrls = [
  'http://auth-service-1:3001',
  'http://auth-service-2:3001',
  'http://auth-service-3:3001'
];
// Round-robin or health-based routing
```
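The round-robin part fits in a few lines. A minimal sketch, with the caveat that `roundRobin` is a hypothetical helper, as real gateways usually delegate this to the proxy layer or an external load balancer:

```typescript
// Cycle through upstream URLs, returning the next one on each call.
function roundRobin(urls: string[]): () => string {
  let index = 0;
  return () => {
    const url = urls[index % urls.length];
    index++;
    return url;
  };
}

const nextAuthUrl = roundRobin([
  'http://auth-service-1:3001',
  'http://auth-service-2:3001',
  'http://auth-service-3:3001'
]);
// Each proxied request calls nextAuthUrl() to pick its target
```

Health-based routing extends this by skipping URLs whose last health check failed.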
Docker: Containerization Strategy
Each service gets its own container for isolation and portability:
Understanding the Dockerfile
```dockerfile
# Lightweight Linux with Node.js
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy dependency files first (Docker layer caching optimization)
COPY package*.json ./

# NOTE: COPY cannot reach outside the build context (so ../../ paths fail).
# Set the compose build context to the repo root and copy relative to it;
# the other COPY paths below would then be prefixed with the service folder.
COPY packages/shared /app/shared

# Install dependencies (cached if package.json unchanged).
# Dev dependencies are needed here because the TypeScript build runs below;
# a multi-stage build can prune them from the final image.
RUN npm ci

# Copy source code (invalidates cache only when code changes)
COPY src ./src
COPY tsconfig.json ./

# Build TypeScript to JavaScript
RUN npm run build

# Document which port the service uses
EXPOSE 3001

# Health check - Docker can restart unhealthy containers
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3001/health', (res) => { process.exit(res.statusCode === 200 ? 0 : 1) })"

# Start the service
CMD ["npm", "start"]
```
Layer Caching Optimization:
Docker builds in layers. By copying `package.json` first, we can reuse the `npm install` layer when only source code changes, speeding up builds significantly.
Docker Compose: Orchestration Magic
```yaml
version: '3.8'
services:
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]  # Redis-specific health check
      interval: 10s
      timeout: 3s
      retries: 5
    networks:
      - microservices_network  # Isolated network

  auth-service:
    build:
      context: ./packages/auth-service
    environment:
      - REDIS_HOST=redis  # Service discovery via service names
      - DATABASE_URL=postgresql://admin:admin123@postgres:5432/microservices
    depends_on:
      redis:
        condition: service_healthy  # Wait for Redis to be ready
      postgres:
        condition: service_healthy
    networks:
      - microservices_network
  # (postgres service and network definitions omitted here for brevity)
```
Service Dependencies Explained:
┌─────────────┐
│ Gateway │ ← Entry point
└─────────────┘
│
▼
┌─────────────┐ ┌─────────────┐
│Auth Service │ │User Service │ ← Application services
└─────────────┘ └─────────────┘
│ │
▼ ▼
┌─────────────┐ ┌─────────────┐
│ Redis │ │ PostgreSQL │ ← Infrastructure services
└─────────────┘ └─────────────┘
Services start in dependency order: Infrastructure → Application → Gateway.
Error Handling: Building Resilient Systems
Microservices fail. Networks are unreliable. Databases go down. Your code must handle this gracefully:
Circuit Breaker Pattern
```typescript
class HttpClient {
  private failures = 0;
  private lastFailureTime = 0;
  private readonly maxFailures = 5;
  private readonly resetTimeout = 60000; // 1 minute

  async get<T>(url: string): Promise<T> {
    // Check if circuit is open (too many recent failures)
    if (this.isCircuitOpen()) {
      throw new Error('Circuit breaker is open - service unavailable');
    }
    try {
      const response = await this.client.get(url);
      this.onSuccess(); // Reset failure count
      return response.data;
    } catch (error) {
      this.onFailure(); // Increment failure count
      throw error;
    }
  }

  private onSuccess(): void {
    this.failures = 0;
  }

  private onFailure(): void {
    this.failures++;
    this.lastFailureTime = Date.now();
  }

  private isCircuitOpen(): boolean {
    if (this.failures >= this.maxFailures) {
      // Circuit opened - check if enough time has passed to try again
      if (Date.now() - this.lastFailureTime > this.resetTimeout) {
        this.failures = 0; // Reset circuit
        return false;
      }
      return true; // Keep circuit open
    }
    return false;
  }
}
```
Why This Matters:
Without circuit breakers, a failing service can cascade failures:
Auth Service Down → User Service keeps trying → User Service overwhelmed → Gateway timeouts → Client errors
With circuit breakers:
Auth Service Down → Circuit opens after 5 failures → User Service fails fast → Client gets immediate error response
Graceful Degradation
```typescript
// Instead of failing completely, provide limited functionality
app.get('/users/profile/:userId', async (req, res) => {
  const { userId } = req.params;
  const token = req.headers.authorization?.substring(7);
  try {
    // Try to get fresh data from auth service
    const authResponse = await authClient.post('/auth/verify', { token });
    // ... normal flow
  } catch (authError) {
    // Auth service down - check local cache or provide limited access
    console.warn('Auth service unavailable, using cached data');
    const cachedProfile = cache.get(userId);
    if (cachedProfile) {
      return res.json({
        success: true,
        data: cachedProfile,
        warning: 'Using cached data - some features may be limited'
      });
    }
    // Complete fallback
    res.status(503).json({
      success: false,
      error: 'Service temporarily unavailable'
    });
  }
});
```
Configuration Management: Environment-Aware Services
Different environments need different configurations:
```typescript
// config/index.ts
interface Config {
  port: number;
  jwtSecret: string;
  redis: {
    host: string;
    port: number;
    password?: string;
  };
  database: {
    url: string;
    ssl: boolean;
  };
}

const development: Config = {
  port: 3001,
  jwtSecret: 'dev-secret-not-secure',
  redis: {
    host: 'localhost',
    port: 6379
  },
  database: {
    url: 'postgresql://localhost:5432/dev',
    ssl: false
  }
};

const production: Config = {
  port: parseInt(process.env.PORT || '3001'),
  jwtSecret: process.env.JWT_SECRET || (() => {
    throw new Error('JWT_SECRET environment variable is required in production');
  })(),
  redis: {
    host: process.env.REDIS_HOST || 'redis',
    port: parseInt(process.env.REDIS_PORT || '6379'),
    password: process.env.REDIS_PASSWORD // Required in production
  },
  database: {
    url: process.env.DATABASE_URL || (() => {
      throw new Error('DATABASE_URL environment variable is required in production');
    })(),
    ssl: true
  }
};

export const config = process.env.NODE_ENV === 'production' ? production : development;
```
Configuration Best Practices:
Fail Fast: Missing required config should crash the service at startup
Environment Parity: Same code runs in all environments with different config
Secrets Management: Never commit secrets to version control
Validation: Validate configuration at startup
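The fail-fast and validation points can be generalized with a tiny helper. This is a sketch, as `requireEnv` is a hypothetical name, and libraries like `zod` or `envalid` do this more thoroughly:

```typescript
// Read a required environment variable, crashing at startup if absent.
// Crashing early beats a half-configured service failing at 3 a.m.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`${name} environment variable is required`);
  }
  return value;
}
```

The production config above then becomes `jwtSecret: requireEnv('JWT_SECRET')`, replacing the repeated throw-in-IIFE pattern.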
Monitoring and Observability: Seeing Inside Your System
In a distributed system, you need visibility into what's happening:
Request Tracing
```typescript
import { randomUUID } from 'crypto';

const requestTracing = (req: Request, res: Response, next: NextFunction) => {
  // Generate or extract trace ID
  const traceId = req.headers['x-trace-id'] as string || randomUUID();

  // Add to request context
  req.traceId = traceId;

  // Add to response headers (client can use for support requests)
  res.setHeader('x-trace-id', traceId);

  // Log request start
  console.log(`[${traceId}] ${req.method} ${req.path} - START`);
  const start = Date.now();

  res.on('finish', () => {
    const duration = Date.now() - start;
    console.log(`[${traceId}] ${req.method} ${req.path} - ${res.statusCode} - ${duration}ms`);
  });

  next();
};
```
Distributed Tracing:
When user-service calls auth-service, it forwards the trace ID:
```typescript
// Forward trace ID in service-to-service calls
const response = await authClient.post('/auth/verify',
  { token },
  {
    headers: {
      'x-trace-id': req.traceId
    }
  }
);
```
Now you can trace a request across all services:
[abc-123] Gateway: GET /api/users/profile/user123 - START
[abc-123] User Service: GET /users/profile/user123 - START
[abc-123] Auth Service: POST /auth/verify - 200 - 15ms
[abc-123] User Service: GET /users/profile/user123 - 200 - 45ms
[abc-123] Gateway: GET /api/users/profile/user123 - 200 - 67ms
Health Checks and Metrics
```typescript
// Health check endpoint for each service
app.get('/health', async (req, res) => {
  const health = {
    service: 'user-service',
    status: 'healthy',
    timestamp: new Date(),
    uptime: process.uptime(),
    memory: process.memoryUsage(),
    dependencies: {
      redis: await checkRedisHealth(),
      database: await checkDatabaseHealth(),
      authService: await checkAuthServiceHealth()
    }
  };

  const isHealthy = Object.values(health.dependencies).every(dep => dep.status === 'healthy');
  res.status(isHealthy ? 200 : 503).json(health);
});

async function checkRedisHealth(): Promise<{ status: string; responseTime?: number }> {
  try {
    const start = Date.now();
    await redis.ping();
    return {
      status: 'healthy',
      responseTime: Date.now() - start
    };
  } catch (error) {
    return { status: 'unhealthy' };
  }
}
```
Testing Strategies for Microservices
Testing microservices requires different strategies than monoliths:
1. Unit Tests (Service Level)
```typescript
// Test individual service logic
describe('AuthService', () => {
  it('should create valid JWT token', async () => {
    const authService = new AuthService();
    const user = { id: 'user123', email: 'test@example.com' };

    const token = await authService.createToken(user);
    const decoded = jwt.verify(token, JWT_SECRET);

    expect(decoded.userId).toBe('user123');
    expect(decoded.email).toBe('test@example.com');
  });
});
```
2. Integration Tests (Service Communication)
```typescript
// Test service-to-service communication
describe('User Profile API', () => {
  it('should return user profile with valid auth', async () => {
    // Start test services
    const authService = await startTestAuthService();
    const userService = await startTestUserService();

    // Create test user
    const { token } = await authService.post('/auth/register', testUser);

    // Test authenticated request
    const response = await userService.get('/users/profile/user123', {
      headers: { Authorization: `Bearer ${token}` }
    });

    expect(response.success).toBe(true);
    expect(response.data.email).toBe(testUser.email);
  });
});
```
3. Contract Tests (API Compatibility)
```typescript
// Ensure services don't break each other's expectations
describe('Auth Service Contract', () => {
  it('should return expected token verification response', async () => {
    const mockResponse = {
      success: true,
      data: {
        userId: 'user123',
        email: 'test@example.com',
        iat: 1234567890,
        exp: 1234567890 + 3600
      }
    };

    // Verify response matches ServiceResponse<AuthPayload> interface
    const isValid = validateAuthResponse(mockResponse);
    expect(isValid).toBe(true);
  });
});
```
Deployment and Scaling Strategies
Development Workflow
```bash
# Start all services for development
docker-compose up -d

# View logs from a specific service
docker-compose logs -f user-service

# Restart a service after code changes
docker-compose restart auth-service

# Scale a service (run multiple instances)
docker-compose up --scale user-service=3 -d
```
Production Considerations
1. Container Orchestration (Kubernetes)
```yaml
# kubernetes/user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3  # Run 3 instances
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry/user-service:v1.2.3
          ports:
            - containerPort: 3002
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
          livenessProbe:  # Kubernetes will restart unhealthy pods
            httpGet:
              path: /health
              port: 3002
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            requests:
              memory: "256Mi"  # Guaranteed resources
              cpu: "250m"
            limits:
              memory: "512Mi"  # Maximum resources
              cpu: "500m"
```
2. Service Mesh (Advanced)
For complex microservices deployments, consider service mesh (Istio, Linkerd):
Automatic load balancing
Circuit breakers
Mutual TLS between services
Traffic splitting for A/B testing
Centralized observability
Common Pitfalls and How to Avoid Them
1. The Distributed Monolith Anti-Pattern
❌ BAD: Services that must always be deployed together
┌─────────┐ sync ┌─────────┐ sync ┌─────────┐
│Service A│ ────→ │Service B│ ────→ │Service C│
└─────────┘ └─────────┘ └─────────┘
✅ GOOD: Services with clear boundaries
┌─────────┐ ┌─────────┐ ┌─────────┐
│Service A│ │Service B│ │Service C│
└─────────┘ └─────────┘ └─────────┘
│ │ │
└───────────────────┼───────────────────┘
│
┌─────────┐
│Message │
│Queue │
└─────────┘
2. Chatty Interface Anti-Pattern
```typescript
// ❌ BAD: Multiple calls to get user data
const user = await userService.get(`/users/${userId}`);
const preferences = await userService.get(`/users/${userId}/preferences`);
const permissions = await userService.get(`/users/${userId}/permissions`);

// ✅ GOOD: Single call with all needed data
const userProfile = await userService.get(`/users/${userId}/profile`);
// Returns: { user, preferences, permissions }
```
3. Data Consistency Challenges
Problem: User updates their email in auth-service, but user-service still has the old email.
Solution: Event-driven eventual consistency
```typescript
// auth-service
app.put('/auth/profile', async (req, res) => {
  const user = await updateUser(userId, updates);

  // Respond immediately
  res.json({ success: true, data: user });

  // Notify other services asynchronously
  await publisher.publish(EventTypes.USER_UPDATED, {
    userId,
    changes: updates,
    timestamp: new Date()
  });
});

// user-service
subscriber.on(EventTypes.USER_UPDATED, async (payload) => {
  await updateUserProfile(payload.userId, payload.changes);
  console.log(`Profile updated for user ${payload.userId}`);
});
```
Performance Optimization Strategies
1. Connection Pooling
```typescript
import http from 'http';
import https from 'https';
import axios, { AxiosInstance } from 'axios';

// Instead of creating new connections for each request,
// reuse TCP connections with keep-alive agents
class HttpClient {
  private client: AxiosInstance;

  constructor(baseURL: string) {
    this.client = axios.create({
      baseURL,
      httpAgent: new http.Agent({
        keepAlive: true,  // Reuse TCP connections
        maxSockets: 50    // Limit concurrent connections
      }),
      httpsAgent: new https.Agent({
        keepAlive: true,
        maxSockets: 50
      })
    });
  }
}
```
2. Caching Strategy
```typescript
import Redis from 'ioredis';

class CacheService {
  private redis: Redis;

  async getUserProfile(userId: string): Promise<User | null> {
    // Try cache first
    const cached = await this.redis.get(`user:${userId}`);
    if (cached) {
      return JSON.parse(cached);
    }

    // Cache miss - fetch from database
    const user = await database.findUser(userId);
    if (user) {
      // Cache for 5 minutes
      await this.redis.setex(`user:${userId}`, 300, JSON.stringify(user));
    }
    return user;
  }

  async invalidateUser(userId: string): Promise<void> {
    await this.redis.del(`user:${userId}`);
  }
}

// Invalidate cache when user is updated
subscriber.on(EventTypes.USER_UPDATED, async (payload) => {
  await cacheService.invalidateUser(payload.userId);
});
```
3. Database Per Service Pattern
Benefits:
Each service can choose optimal database technology
No shared database bottlenecks
Independent scaling
Challenges:
No foreign key constraints across services
Complex queries spanning multiple services
Data consistency requires careful design
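Without cross-service joins, queries that span services are typically stitched together in application code (the API composition pattern). A minimal sketch; `fetchOrderWithUser`, the `Order`/`UserSummary` shapes, and the two fetchers are illustrative assumptions, not endpoints from the services built in this article:

```typescript
// API composition: join data from two services in memory,
// since no SQL JOIN can cross service boundaries.
interface Order { id: string; userId: string; total: number; }
interface UserSummary { id: string; username: string; }

async function fetchOrderWithUser(
  orderId: string,
  getOrder: (id: string) => Promise<Order>,
  getUser: (id: string) => Promise<UserSummary>
): Promise<Order & { user: UserSummary }> {
  const order = await getOrder(orderId);
  const user = await getUser(order.userId); // second hop to the user service
  return { ...order, user };
}
```

The trade-off is extra network hops and the need to handle one side being unavailable, which is exactly why the chatty-interface and graceful-degradation patterns above matter.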
Security in Microservices
1. Service-to-Service Authentication
```typescript
// Generate service-specific JWT tokens
class ServiceAuthenticator {
  private serviceSecret: string;

  generateServiceToken(serviceName: string): string {
    return jwt.sign(
      {
        service: serviceName,
        type: 'service-to-service'
      },
      this.serviceSecret,
      { expiresIn: '1h' }
    );
  }

  verifyServiceToken(token: string): boolean {
    try {
      const decoded = jwt.verify(token, this.serviceSecret);
      return decoded.type === 'service-to-service';
    } catch {
      return false;
    }
  }
}

// Use in HTTP client
class HttpClient {
  constructor(baseURL: string, serviceName: string) {
    this.client = axios.create({ baseURL });

    // Add service auth to all requests
    this.client.interceptors.request.use((config) => {
      const serviceToken = this.authenticator.generateServiceToken(serviceName);
      config.headers['x-service-auth'] = serviceToken;
      return config;
    });
  }
}
```
2. API Gateway Security
```typescript
// Centralized security policies
app.use('/api/admin', adminAuthMiddleware);  // Admin routes
app.use('/api', userAuthMiddleware);         // User routes
app.use('/api/public', publicRateLimit);     // Public routes with rate limiting

// Request validation
app.use('/api/users', requestValidator({
  'PUT /api/users/:userId': {
    params: { userId: 'string' },
    body: {
      email: 'email?',     // Optional email validation
      username: 'string?'  // Optional string validation
    }
  }
}));
```
Testing Your Implementation
Let's test the complete flow:
1. Start the Services
```bash
# Build and start all services
docker-compose up --build -d

# Check all services are healthy
curl http://localhost:3000/health
curl http://localhost:3001/health        # Direct auth service
curl http://localhost:3002/users/health  # Direct user service
```
2. Test User Registration Flow
```bash
# Register a new user
curl -X POST http://localhost:3000/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "email": "john@example.com",
    "password": "securepassword123",
    "username": "johndoe"
  }'
```

Expected response:

```json
{
  "success": true,
  "data": {
    "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "user": {
      "id": "user_1642675200000",
      "email": "john@example.com",
      "username": "johndoe"
    }
  }
}
```

Watch the logs to see the event flow:

```bash
docker-compose logs -f

# You should see:
# auth-service | [Event Published] user.created: {"userId":"user_1642675200000",...}
# user-service | [Event Received] user.created: {"userId":"user_1642675200000",...}
# user-service | [Profile Created] for user: user_1642675200000
```
3. Test Authentication Flow
```bash
# Login with the created user
LOGIN_RESPONSE=$(curl -s -X POST http://localhost:3000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "john@example.com",
    "password": "securepassword123"
  }')

# Extract token from response
TOKEN=$(echo $LOGIN_RESPONSE | jq -r '.data.token')

# Use token to access protected endpoint
curl -X GET http://localhost:3000/api/users/profile/user_1642675200000 \
  -H "Authorization: Bearer $TOKEN"
```
4. Test Service Communication
```bash
# This request will:
# 1. Go to API Gateway
# 2. Gateway forwards to User Service
# 3. User Service calls Auth Service to verify token
# 4. Auth Service responds with user info
# 5. User Service returns profile data
# 6. Gateway returns response to client

# Check the logs to see the inter-service communication:
docker-compose logs user-service | grep "HTTP"
# Should show: [HTTP] POST http://auth-service:3001/auth/verify
```
Production Readiness Checklist
Before deploying to production, ensure you have:
Infrastructure
- [ ] Container registry setup
- [ ] Kubernetes cluster or Docker Swarm
- [ ] Load balancer configuration
- [ ] SSL/TLS certificates
- [ ] Database backup strategy
- [ ] Monitoring setup (Prometheus, Grafana)
- [ ] Log aggregation (ELK stack)
Security
- [ ] Service-to-service authentication
- [ ] API rate limiting
- [ ] Input validation
- [ ] Secret management (Vault, K8s secrets)
- [ ] Network policies (service mesh)
- [ ] Security scanning in CI/CD
Reliability
- [ ] Health checks implemented
- [ ] Circuit breakers configured
- [ ] Retry policies defined
- [ ] Graceful shutdown handling
- [ ] Database connection pooling
- [ ] Message queue durability
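The graceful-shutdown item above deserves a concrete shape. A minimal sketch for a Node.js service; the `server` and `cleanup` arguments and the 10-second drain window are assumptions:

```typescript
// Stop accepting new requests on SIGTERM, let in-flight ones finish,
// then close dependencies. Kubernetes sends SIGTERM before SIGKILL.
function setupGracefulShutdown(
  server: { close: (cb: (err?: Error) => void) => void },
  cleanup: () => Promise<void>,
  timeoutMs = 10_000
): () => void {
  const shutdown = () => {
    console.log('SIGTERM received, draining connections...');
    // Force-exit if draining takes longer than the grace period
    const timer = setTimeout(() => process.exit(1), timeoutMs);
    server.close(async () => {
      clearTimeout(timer);
      await cleanup(); // e.g. redis.quit(), db pool.end()
      process.exit(0);
    });
  };
  process.on('SIGTERM', shutdown);
  return shutdown;
}
```

Wire it up once at startup, e.g. `setupGracefulShutdown(httpServer, async () => { await redis.quit(); })`, so rolling deploys never cut off in-flight requests.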
Observability
Distributed tracing
Metrics collection
Error tracking
Performance monitoring
Business metrics
Alerting rules
Conclusion: The Microservices Journey
Building microservices is not just about splitting code into smaller pieces—it's about designing resilient, scalable systems that can evolve independently. The architecture we've built demonstrates:
Key Principles Applied:
Single Responsibility: Each service has one job
Independence: Services can be developed, deployed, and scaled separately
Resilience: Failures are isolated and handled gracefully
Observability: You can see what's happening across the system
Security: Each layer is protected appropriately
What You've Learned:
When to use synchronous vs asynchronous communication
How to implement reliable message queues
Why API gateways are essential
How to handle distributed system failures
Container orchestration with Docker Compose
Production deployment considerations
Next Steps:
Add More Services: Implement notification, payment, or analytics services
Improve Observability: Add distributed tracing with Jaeger or Zipkin
Scale Horizontally: Use Kubernetes for production deployment
Implement Service Mesh: Add Istio for advanced traffic management
Add Event Sourcing: For complex business logic and audit trails
Remember: Start simple, measure everything, and evolve your architecture based on real needs. Microservices solve scaling problems, but they introduce complexity. Make sure the benefits outweigh the costs for your specific use case.
The code we've built is production-ready for small to medium-scale applications. As your system grows, you'll need to add more sophisticated patterns, but the foundation we've established will serve you well.
Happy building! 🚀
Written by Saurav.