Step-by-Step Transition from Monolith to Microservices with Docker and Kubernetes


Let’s see how we can create a microservices architecture with the help of Docker and Kubernetes on an Ubuntu machine.
Introduction
Microservices architecture has revolutionized modern software development by breaking down large monolithic applications into smaller, independent services. These services communicate through APIs, making applications more scalable, maintainable, and resilient. However, managing multiple microservices comes with challenges, such as service discovery, scaling, and orchestration.
To solve these challenges, Docker and Kubernetes have become the go-to tools for containerization and orchestration. Docker allows developers to package applications into lightweight, portable containers, while Kubernetes helps in managing and orchestrating these containers efficiently.
In this article, we will explore how to set up and work with microservices on an Ubuntu machine, leveraging Docker and Kubernetes to create a seamless development and deployment workflow.
Understanding Docker
Docker is a containerization platform that allows developers to package applications and their dependencies into standardized units called containers. Unlike traditional virtual machines, containers share the host OS kernel, making them lightweight and efficient.
Key Concepts in Docker
Docker Engine – The core component responsible for running and managing containers.
Docker Image – A blueprint containing the application code and dependencies.
Docker Container – A running instance of a Docker image.
Dockerfile – A script that defines how a Docker image should be built.
Docker Compose – A tool to define and run multi-container applications using a `docker-compose.yml` file.
Docker simplifies microservices deployment by ensuring consistency across different environments, reducing compatibility issues between development, testing, and production.
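To make that last point concrete, here is a minimal, illustrative `docker-compose.yml` sketch for two hypothetical services; the service names and build paths are placeholders, and we won't actually use Compose in this tutorial:
version: "3.8"
services:
  api:
    build: ./api            # build from the Dockerfile in ./api
    ports:
      - "3000:3000"         # host:container port mapping
  frontend:
    build: ./frontend
    ports:
      - "8080:3000"
    depends_on:
      - api                 # start api before frontend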
Understanding Kubernetes
Kubernetes (often abbreviated as K8s) is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. It ensures that applications run reliably across different environments, whether on-premises or in the cloud.
Key Concepts in Kubernetes
Pods – The smallest deployable units that contain one or more containers.
Nodes – Physical or virtual machines that run pods.
Cluster – A collection of nodes managed by Kubernetes.
Deployments – Define how applications should be deployed and managed.
Services – Provide networking and load balancing for pods.
Ingress – Manages external access to services.
Kubernetes allows microservices to communicate seamlessly, ensures high availability through self-healing mechanisms, and automatically scales applications based on demand.
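To make the Pod concept concrete, here is a minimal, illustrative Pod manifest (the nginx image is just a placeholder; later in this article we'll use Deployments rather than bare Pods):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: web
      image: nginx:alpine     # placeholder container image
      ports:
        - containerPort: 80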
We're going to look at how to set all of this up on an Ubuntu machine acting as our server. (I'm using Ubuntu 20.04, since it supports recent versions of these tools.)
We will build two Node.js applications using Express, so make sure the machine has Node.js and npm installed. Then let's set up Docker and Kubernetes.
Make sure Docker Engine is installed on your machine by following the Docker documentation. For Kubernetes we'll use kubectl (Kubernetes Control), the command-line tool for interacting with a Kubernetes cluster. It lets you manage and control Kubernetes resources, deploy applications, inspect logs, and troubleshoot issues.
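If kubectl isn't installed yet, one common way to install it on Ubuntu, following the Kubernetes documentation, looks like this (the URL fetches the latest stable release):
# Download the latest stable kubectl binary for Linux x86-64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Move it into PATH with the right ownership and permissions
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify the client version
kubectl version --client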
Let’s set up our Node.js applications
mkdir -p ~/microservices/service1
cd ~/microservices/service1
# Initialize npm project
npm init -y
# Install dependencies
npm install express
Create `app.js` with the following content:
const express = require('express');
const app = express();
const port = 3000;

app.use(express.json());

// Sample data
const products = [
  { id: 1, name: 'Product 1', price: 99.99 },
  { id: 2, name: 'Product 2', price: 149.99 },
  { id: 3, name: 'Product 3', price: 199.99 }
];

// API endpoints
app.get('/api/products', (req, res) => {
  res.json(products);
});

app.get('/api/products/:id', (req, res) => {
  const product = products.find(p => p.id === parseInt(req.params.id));
  if (!product) return res.status(404).json({ message: 'Product not found' });
  res.json(product);
});

app.get('/api/health', (req, res) => {
  res.json({ status: 'healthy', service: 'product-service' });
});

app.listen(port, () => {
  console.log(`Service 1 listening at http://localhost:${port}`);
});
This first app exposes the products that the second app will fetch directly. Let's continue.
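Before containerizing it, you can sanity-check the service locally (assuming Node.js is installed on the machine):
# Start the service in the background
node app.js &
# Hit the endpoints
curl http://localhost:3000/api/products
curl http://localhost:3000/api/products/1
curl http://localhost:3000/api/health
# Stop it when done
kill %1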
We create a Docker image for it with a `Dockerfile`:
# Use a lightweight Node.js base image
FROM node:16-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the manifests first so dependency installs are cached
COPY package*.json ./
RUN npm install --production
# Copy the application code
COPY app.js ./
# The service listens on port 3000
EXPOSE 3000
CMD ["node", "app.js"]
And with that, we're done with the first application, so we can move on to the second.
mkdir -p ~/microservices/service2
cd ~/microservices/service2
# Initialize npm project
npm init -y
# Install dependencies
npm install express axios
Create `app.js` for the second service:
const express = require('express');
const axios = require('axios');
const app = express();
const port = 3000;

// Get the API service URL from environment variables
const API_SERVICE_URL = process.env.API_SERVICE_URL || 'http://service1:3000';

app.get('/', (req, res) => {
  res.send('Service 2 is running!');
});

app.get('/products', async (req, res) => {
  try {
    const response = await axios.get(`${API_SERVICE_URL}/api/products`);
    res.json({
      source: 'service2',
      products: response.data
    });
  } catch (error) {
    res.status(500).json({
      error: 'Failed to fetch products',
      details: error.message
    });
  }
});

app.get('/product/:id', async (req, res) => {
  try {
    const response = await axios.get(`${API_SERVICE_URL}/api/products/${req.params.id}`);
    res.json({
      source: 'service2',
      product: response.data
    });
  } catch (error) {
    if (error.response && error.response.status === 404) {
      return res.status(404).json({ message: 'Product not found' });
    }
    res.status(500).json({
      error: 'Failed to fetch product',
      details: error.message
    });
  }
});

app.get('/health', (req, res) => {
  res.json({ status: 'healthy', service: 'frontend-service' });
});

app.listen(port, () => {
  console.log(`Service 2 listening at http://localhost:${port}`);
});
Lastly, the `Dockerfile`, which is identical to Service 1's:
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY app.js ./
EXPOSE 3000
CMD ["node", "app.js"]
We have our services; let's build the Docker images:
# Build Service 1
cd ~/microservices/service1
docker build -t service1:latest .
# Build Service 2
cd ~/microservices/service2
docker build -t service2:latest .
Note: to make sure everything went well, run `sudo docker images` and check that the two images we just built are listed.
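Optionally, you can also smoke-test an image directly with Docker before moving to Kubernetes:
# Run service1 and map its port to the host
docker run --rm -d -p 3000:3000 --name service1-test service1:latest
curl http://localhost:3000/api/health
# Clean up the test container
docker stop service1-test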
Setting Up Kubernetes Deployment
mkdir -p ~/microservices/k8s
cd ~/microservices/k8s
Create `service1.yaml`:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
        - name: service1
          image: service1:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              cpu: "0.1"
              memory: "128Mi"
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  selector:
    app: service1
  ports:
    - port: 3000
      targetPort: 3000
Create `service2.yaml`:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service2
  template:
    metadata:
      labels:
        app: service2
    spec:
      containers:
        - name: service2
          image: service2:latest
          imagePullPolicy: Never
          env:
            - name: API_SERVICE_URL
              value: "http://service1:3000"
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              cpu: "0.1"
              memory: "128Mi"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: service2
spec:
  selector:
    app: service2
  ports:
    - port: 3000
      targetPort: 3000
  type: NodePort
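Once a cluster is reachable (we'll start one with minikube in the next step), a quick client-side dry run can catch YAML indentation mistakes before anything is actually deployed:
kubectl apply --dry-run=client -f ~/microservices/k8s/service1.yaml
kubectl apply --dry-run=client -f ~/microservices/k8s/service2.yaml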
Now, to run all of this we'll use `minikube`. Let's make sure it's installed and running:
# If not started, start minikube
minikube start
# Enable the ingress addon (useful for later)
minikube addons enable ingress
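We won't need the ingress addon for this walkthrough, but since the Key Concepts section mentioned Ingress, here is a minimal, illustrative manifest that would route external traffic to service2 (the host name is a placeholder you'd map in /etc/hosts):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
spec:
  rules:
    - host: microservices.local    # placeholder host for local testing
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 3000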
Deploy the services we created to minikube
kubectl apply -f ~/microservices/k8s/service1.yaml
kubectl apply -f ~/microservices/k8s/service2.yaml
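One gotcha: because the deployments use imagePullPolicy: Never, the images must exist inside minikube's container runtime, not just in the host's Docker. If the pods report ErrImageNeverPull or ImagePullBackOff, load the locally built images into minikube:
minikube image load service1:latest
minikube image load service2:latest
(Alternatively, you can point your shell at minikube's Docker daemon with `eval $(minikube docker-env)` and rebuild the images there.)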
To check that everything is fine, run this command
kubectl get pods
and verify that the deployed pods show 1/1 in the READY column.
To use the app, get the service2 URL, since it is the one exposed via NodePort and the one communicating with service1:
minikube service service2 --url
and then curl it:
curl <service-url>
Then access the endpoints:
Root endpoint:
<service-url>/
Products endpoint:
<service-url>/products
Specific product:
<service-url>/product/1
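Based on the code above, the products endpoint should return something like:
curl <service-url>/products
# Expected (shape taken from the service code; exact formatting may differ):
# {"source":"service2","products":[{"id":1,"name":"Product 1","price":99.99}, ...]}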
Monitoring and Troubleshooting
To check the status of your deployments:
kubectl get pods
kubectl get deployments
kubectl get services
To check logs from a pod:
kubectl logs <pod-name>
To describe a pod (for troubleshooting):
kubectl describe pod <pod-name>
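A few more standard kubectl commands that come in handy here (optional extras):
# Stream the logs of a pod from the service2 deployment
kubectl logs deployment/service2 -f
# List recent cluster events, oldest first, to spot scheduling or probe failures
kubectl get events --sort-by=.metadata.creationTimestamp
# Scale a deployment up or down on demand
kubectl scale deployment service2 --replicas=3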
Conclusion
Deploying microservices using Docker and Kubernetes on a home server represents a powerful approach to modern application development and deployment. Through this article, we've explored how to set up two Node.js microservices that communicate with each other within a Kubernetes environment, specifically using Minikube for local development.
Key Takeaways
The journey from monolithic applications to microservices architecture offers numerous benefits:
Modularity: Each service can be developed, deployed, and scaled independently
Resilience: Failures in one service don't necessarily affect others
Technology Flexibility: Different services can use different technologies as needed
Development Agility: Smaller, focused teams can work on individual services
Scalability: Resources can be allocated precisely where needed
Docker containers provide the consistent packaging and isolation needed for microservices, while Kubernetes offers the orchestration layer that manages deployment, scaling, and communication between services.
Challenges and Considerations
While powerful, this approach does come with complexities:
Learning Curve: Kubernetes has a steep learning curve
Resource Requirements: Even minimal Kubernetes setups require significant resources
Network Complexity: Service-to-service communication requires careful configuration
Operational Overhead: Monitoring and maintaining multiple services requires attention
Final Thoughts
Building a microservices architecture is more than just a learning exercise—it's a practical way to develop real-world skills applicable to modern cloud-native development. By understanding the principles and practices demonstrated in this article, you're well-equipped to tackle more complex architectures and scenarios in both personal and professional projects.
The combination of Docker for containerization and Kubernetes for orchestration provides a solid foundation for reliable, scalable, and maintainable applications. As you continue your microservices journey, remember that the key to success lies in embracing the right balance of service granularity, team organization, and technological choices for your specific needs.