Building Microservices with AWS Lambda and Serverless Framework: A Journey from Issues to Solutions
Microservices architecture is a popular way to build applications that are scalable, maintainable, and resilient. Instead of one big application, the system is split into smaller services, each handling its own task and owning its own data. This helps teams develop, test, and launch new features faster. The challenge, however, is that managing and deploying each of these services individually adds complexity.
In my experience building microservices with AWS Lambda and the Serverless Framework, I encountered several challenges, from structuring services to deployment automation. Here's how I tackled these challenges and built a reliable, serverless microservice architecture.
Microservice Architecture: The Serverless Approach
When moving to a microservices setup, the goal was to move away from one big system and allow each part of the application to be developed, deployed, and scaled separately. AWS Lambda was a great fit because it automatically scales as needed and you only pay for what you use.
To handle deployments, I went with the Serverless Framework. It makes it easy to set up and deploy serverless functions like Lambda, API Gateway, S3, and SNS. The framework also lets you manage infrastructure through code and has plugins to add extra features.
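To make this concrete, here is a minimal sketch of what a single service's serverless.yml can look like; the service name, runtime, and handler path are illustrative, not taken from my actual setup:

```yaml
# Minimal Serverless Framework service definition (names are illustrative)
service: service1

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

functions:
  api:
    handler: handler.handler    # the exported Lambda handler
    events:
      - http:                   # API Gateway trigger
          path: /{proxy+}
          method: any
```

Running `serverless deploy` against a file like this provisions the Lambda function and its API Gateway route together, which is what makes infrastructure-as-code practical for each service.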
Challenges I Faced
1. Folder Structure for Independent Services
When adopting microservices, I aimed to keep each service independent—allowing for isolated codebases, deployments, and clear boundaries. Initially, I used a single repository with separate folders for each service, all sharing a base serverless.yml file for deployment:
serverless.base.yml # Shared configurations
services/
├── service1/
├── service2/
├── service3/
└── service4/
Deployment Issues
This structure led to deployment challenges. Even if only service1 was updated, the deployment process would still redeploy service2, service3, and service4. This resulted in long deployment times and inefficiency.
The Solution: Independent Repositories
To fix this, I moved each service into its own repository, giving each service its own Express app, serverless.yml, and handler file. Now, each service can be deployed separately, like so:
service1-repo/serverless.yml
service2-repo/serverless.yml
service3-repo/serverless.yml
service4-repo/serverless.yml
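Inside each repository, the Express app is wrapped into a Lambda handler. A minimal sketch is below, assuming the serverless-http package; the route and service name are illustrative, and the Express wiring is kept in its own function so the route logic stays testable without the packages installed:

```javascript
// handler.js sketch: plain route logic plus Express/Lambda wiring.
// express and serverless-http are assumed dependencies of the service repo.

// Route logic kept as a pure function so it is easy to unit-test.
function healthPayload(serviceName) {
  return { service: serviceName, status: 'ok' };
}

// Wiring (requires `npm i express serverless-http`); only called at deploy time,
// so loading this file does not require those packages to be installed.
function buildHandler() {
  const serverless = require('serverless-http');
  const express = require('express');
  const app = express();
  app.get('/health', (req, res) => res.json(healthPayload('service1')));
  return serverless(app); // exported Lambda handler referenced from serverless.yml
}

module.exports = { healthPayload, buildHandler };
```

With this layout, `serverless deploy` in one repository touches only that service, which is exactly what the single-repo setup could not give me.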
2. Centralizing Database Schemas Across Microservices
In a microservices architecture, I faced a challenge with accessing data from one service in another. While I could copy the same database collections into each service, this approach wasn’t ideal. If a backend developer made changes to one service's collection and forgot to update the others, it could cause problems.
The Solution: Using AWS Common Layer
To solve this, I decided to use a shared AWS Lambda Layer (what I call the Common Layer). I had previously used layers to package Node modules for Lambda functions, but this time I used one to share database schemas across all services.
By using the Common Layer, I can centralize the database schemas, allowing all services to access the same models without duplicating code. This ensures consistency and makes it easier to manage changes across all services.
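A hedged sketch of how the layer and a consuming service can be wired together in serverless.yml; the layer name, path, account ID, and version number are all illustrative:

```yaml
# In the layer's own serverless.yml: package the shared schemas as a layer.
# The "layer" folder contains nodejs/schemas/..., which Lambda mounts under /opt/nodejs.
layers:
  commonSchemas:
    path: layer
    compatibleRuntimes:
      - nodejs18.x

# In each consuming service's serverless.yml: attach the published layer version.
functions:
  api:
    handler: handler.handler
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:commonSchemas:3
```

At runtime, a service can then load a schema with `require('/opt/nodejs/schemas/user')` instead of keeping its own copy, so a schema change only has to be made in one place.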
3. Updating and Reconfiguring the Layer ARN in serverless.yml
Every time I updated the shared database schema in the AWS Common Layer, I had to redeploy the layer. AWS automatically versioned the layer ARN, but I then needed to manually update the new ARN in the serverless.yml file of every service. This became time-consuming and error-prone, as each service had to be reconfigured with the latest version of the layer.
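One way to reduce this churn (a sketch of an approach I did not use in the original setup, assuming the layer's deploy step writes its newest ARN to SSM Parameter Store under a hypothetical name like /layers/common-schemas/latest-arn):

```yaml
functions:
  api:
    handler: handler.handler
    layers:
      # Resolved at deploy time via the Serverless Framework's ${ssm:...} variable,
      # so each service picks up the newest layer version without hand-editing the
      # ARN (the parameter name here is hypothetical).
      - ${ssm:/layers/common-schemas/latest-arn}
```

With this, redeploying a service automatically pulls the current layer version, and the manual reconfiguration step disappears.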
4. Implementing Event-Driven Architecture
In my microservices setup, I needed a way for services to communicate without tightly coupling them. For example, when one service updated data, other services needed to be notified and react accordingly. Using direct calls between services would create dependencies and make the system harder to scale and maintain.
The Solution: Pub/Sub Architecture with AWS SNS and SQS
To solve this, I implemented a Pub/Sub (Publish/Subscribe) architecture using AWS services. Here's how it works:
AWS SNS (Simple Notification Service): When an event occurs in one service, that service publishes a message to an SNS topic.
AWS SQS (Simple Queue Service): Other services subscribe to the SNS topic using SQS queues. Each service gets the relevant event messages through its queue and processes them independently.
This setup allows services to communicate asynchronously. They are decoupled, which means if one service changes or experiences downtime, it doesn’t impact the others. This also makes it easier to scale individual services without breaking the overall system.
By adopting this approach, I was able to make my architecture more resilient, scalable, and easier to maintain.
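A minimal sketch of both sides of this pattern, with the actual AWS SDK call left as a comment so the message-shaping logic stays testable on its own; the topic ARN and event name are illustrative:

```javascript
// Publisher side: shape the SNS publish parameters for an event.
function buildEventMessage(topicArn, eventType, payload) {
  return {
    TopicArn: topicArn,
    Message: JSON.stringify(payload),
    MessageAttributes: {
      eventType: { DataType: 'String', StringValue: eventType },
    },
  };
}
// With @aws-sdk/client-sns this would be published as:
//   await snsClient.send(new PublishCommand(buildEventMessage(...)));

// Subscriber side: an SQS-triggered Lambda handler. When SQS is subscribed to
// SNS, each record's body is the SNS envelope, whose Message field holds the
// original JSON payload published above.
async function consumer(event) {
  for (const record of event.Records) {
    const envelope = JSON.parse(record.body);
    const payload = JSON.parse(envelope.Message);
    // ...react to the event here, e.g. update this service's own data
    console.log('received', payload);
  }
}

module.exports = { buildEventMessage, consumer };
```

Because the publisher only knows the topic and the subscriber only knows its queue, neither service holds a reference to the other, which is what keeps them decoupled.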
5. Efficient Local Development Setup for Serverless Applications
When developing a serverless application, it’s crucial to have a smooth local development experience. I faced a challenge with ensuring that my AWS Lambda functions were running correctly offline and that changes in the code were picked up automatically. Additionally, I needed to address the cold start issue that can occur when my functions are deployed, leading to slower response times.
The Solution: Using Serverless Offline and Nodemon with Warmup Plugin
To overcome these challenges, I implemented the following solutions:
Using Serverless Offline Plugin: This plugin allows me to run my serverless application locally, simulating the AWS Lambda environment. It’s crucial for testing my functions without needing to deploy every time.
Configuring Nodemon: I set up Nodemon to watch for changes in my source files. I created a nodemon.json file with the following configuration:
{
  "watch": ["src", "handler.js"],
  "ext": "js,ts,json",
  "exec": "serverless offline"
}
This configuration lets Nodemon automatically restart the Serverless Offline server whenever I change JavaScript, TypeScript, or JSON files in the src folder or handler.js, which significantly speeds up development by eliminating manual restarts.
Implementing the Warmup Plugin: To tackle the cold-start problem for my deployed functions, I used the Warmup Plugin. It keeps my functions warm by invoking them on a schedule, ensuring they are ready to respond quickly when triggered.
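Both plugins are enabled in serverless.yml; the sketch below assumes the serverless-offline and serverless-plugin-warmup packages, and the exact warmup option names depend on the plugin version you install:

```yaml
plugins:
  - serverless-offline
  - serverless-plugin-warmup

custom:
  warmup:
    default:
      enabled: true
      events:
        # Invoke the functions every few minutes to keep containers warm
        - schedule: rate(5 minutes)
```

The warmer itself is a small scheduled Lambda the plugin generates, so there is a modest cost trade-off for the faster response times.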
By combining these tools, I created a robust local development environment that allowed for rapid iteration while also addressing performance issues with deployed functions. This approach improved my workflow, making it easier to develop and test my serverless applications effectively.
In conclusion, building microservices with AWS Lambda and the Serverless Framework has been an enriching journey filled with challenges and solutions. By adopting a structured approach to service management, leveraging AWS Common Layer for schema sharing, implementing event-driven architecture, and optimizing local development, I have created a more resilient and efficient application. This experience has not only enhanced my technical skills but also deepened my understanding of microservices. I hope my insights can guide others on their path to building scalable and maintainable serverless applications.