How LocalStack Works: Under the Hood

Abel Tavares

Introduction

Imagine building cloud applications without needing to touch the cloud itself. That’s where LocalStack steps in. It’s an open-source tool that emulates AWS services on your local machine, giving you a sandbox for development and testing without the trouble of real cloud infrastructure. Whether you're developing new features or experimenting with new tools like Terraform, LocalStack can save you both time and money.

What’s LocalStack?

In simple terms, LocalStack is like mini-AWS running locally. It replicates several AWS services such as S3, DynamoDB, Lambda, and SQS, allowing you to run and test your cloud-based applications without ever needing to connect to AWS.

Why is this cool?

  • Rapid Development: You skip the deployment step, so iteration is much faster.

  • No Surprise Costs: Avoid the unexpected AWS bill at the end of the month.

  • Safe Testing Environment: It’s isolated, so you won’t accidentally mess up your data.

Why Use LocalStack?

The biggest benefits are speed and cost-efficiency. AWS calls can be slow, and charges add up fast, especially when you’re experimenting. With LocalStack, you spin up a local instance of AWS services, making it perfect for integration testing and trying out new tools. Personally, I’ve used it as a learning playground to get comfortable with Terragrunt; it’s an awesome way to practice and experiment without fear.

The Architecture Breakdown

LocalStack has a simple yet effective architecture, leveraging Docker and Python-based implementations (a quick way to see these pieces in action follows the list):

  • Docker Container: LocalStack runs inside a Docker container, which isolates it from your host environment. This ensures a consistent setup across different machines.

  • Edge Proxy: At the core of LocalStack is an edge proxy (API gateway) that listens at http://localhost:4566. It routes incoming requests to the appropriate mock service.

  • Service Handlers: Each AWS service (like S3, DynamoDB, Lambda) has its own handler. These handlers mimic the real AWS APIs using Python libraries and custom code.
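
If you want to check that the edge proxy is up and see which service handlers it exposes, a minimal sketch looks like this (the health endpoint path below is the one used by recent LocalStack releases; older versions exposed it at /health instead):

# Start LocalStack; the edge proxy listens on port 4566
docker run --rm -d -p 4566:4566 --name localstack localstack/localstack

# Ask the edge proxy which service handlers are available and running
curl http://localhost:4566/_localstack/health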

How Requests Are Handled

When you send an API request (using the AWS CLI, Terraform, or Terragrunt), LocalStack intercepts it and processes it locally. Here’s a quick look at how it flows, with a concrete example after the steps:

  1. Edge Proxy: The request first hits the edge proxy at http://localhost:4566. This proxy figures out which AWS service the request is targeting based on details of the request, such as the path, the Host header, and the signing information in the Authorization header.

  2. Routing to Service Handlers: The edge proxy forwards the request to the appropriate service handler, like S3 or DynamoDB.

  3. Mocked Service Logic: The service handler processes the request using Python code that simulates the real AWS service’s behavior.

    • For S3, it might save an object to your local filesystem.

    • For DynamoDB, it stores the item in memory or an SQLite database.

  4. Returning the Response: The service handler sends back a response that looks exactly like a real AWS API response. Your SDK or CLI tool handles it seamlessly, as if it were talking to AWS.
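
Here’s what that flow looks like from the client side. This is a minimal sketch using the AWS CLI; the bucket name is just an illustration, and the dummy credentials work because LocalStack doesn’t validate them:

# Dummy credentials; LocalStack accepts any values
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# The request hits the edge proxy on port 4566, gets routed to the S3 handler,
# and the response comes back looking like a genuine AWS response
aws --endpoint-url=http://localhost:4566 s3 mb s3://demo-bucket
aws --endpoint-url=http://localhost:4566 s3 ls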

Emulating AWS Services

LocalStack uses a mix of Python libraries and custom code to mimic AWS services. Here’s how it handles some popular services (example commands follow the list):

  • S3 (Simple Storage Service): LocalStack emulates the S3 API (earlier releases built on Moto, a Python library for mocking AWS; newer ones ship a native provider) and keeps objects locally, in memory by default or on disk when persistence is enabled. This way, you can run aws s3 cp commands without any real cloud interaction.

  • DynamoDB: DynamoDB operations are handled with in-memory data or persisted with SQLite (under the hood, LocalStack runs AWS’s own DynamoDB Local). You can run standard API calls like PutItem, Query, and Scan without needing a real DynamoDB instance.

  • Lambda: LocalStack can execute Lambda functions locally using Docker. It spins up containers to run your function code in a sandboxed environment.

  • SQS (Simple Queue Service): LocalStack handles SQS using a Python-based queue system. Messages are stored in memory, simulating typical SQS features like visibility timeout and message retention.
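
If you want to poke at these emulated services yourself, a few illustrative commands are shown below. This is a sketch only; the table, item, and queue names are made up, and the same dummy credentials from earlier apply:

# DynamoDB: create a table and put an item, all handled locally
aws --endpoint-url=http://localhost:4566 dynamodb create-table \
    --table-name demo-table \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

aws --endpoint-url=http://localhost:4566 dynamodb put-item \
    --table-name demo-table \
    --item '{"id": {"S": "123"}}'

# SQS: create a queue, send a message, then read it back from the local queue
QUEUE_URL=$(aws --endpoint-url=http://localhost:4566 sqs create-queue \
    --queue-name demo-queue --query QueueUrl --output text)
aws --endpoint-url=http://localhost:4566 sqs send-message \
    --queue-url "$QUEUE_URL" --message-body "hello from LocalStack"
aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url "$QUEUE_URL"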

Managing Data and State

LocalStack can store data either in memory or persistently. Here’s what you need to know:

  • In-Memory Storage: By default, LocalStack uses in-memory storage, meaning all data is lost when the container stops. This is ideal for temporary testing.

  • Persistent Storage: You can configure LocalStack to use persistent storage (e.g., local volumes or SQLite), allowing data to persist across container restarts.

  • Service State: Each service maintains its own state (e.g., list of S3 buckets, DynamoDB tables). This state can be stored in memory or on the local filesystem, depending on your configuration.

Tip: To retain data, mount a volume when running LocalStack and enable persistence for your version (older releases use the DATA_DIR environment variable, as shown below; newer ones use PERSISTENCE=1 and keep state under /var/lib/localstack):

docker run -d -p 4566:4566 -e DATA_DIR=/tmp/localstack/data -v localstack_data:/tmp/localstack localstack/localstack

Differences from Real AWS

While LocalStack does an impressive job of mimicking AWS, there are a few differences to keep in mind:

  • No Real Backend Connections: It doesn’t connect to actual AWS services. Instead, it mocks the responses locally.

  • Limited Feature Set: Some advanced features like S3 event notifications or DynamoDB Streams might not be fully supported or may behave differently.

  • Local Execution of Lambda: Instead of using AWS’s managed environment, Lambda functions are executed locally using Docker containers.

Challenges with LocalStack

Despite its many advantages, LocalStack isn’t without a few pain points:

  • Feature Parity: Not all AWS features are implemented. Some APIs might behave differently, especially for less common services.

  • Performance Limitations: Running everything locally can be slower, especially for complex, data-heavy operations.

  • Networking Issues: Services relying on DNS resolution (like S3 with virtual-hosted-style URLs, where the bucket name becomes part of the hostname) may face issues in a local setup; see the workaround below.
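
If you run into the virtual-hosted-style URL problem with S3, one common workaround is to force path-style addressing on the client so the bucket name stays in the URL path rather than the hostname (this is a client-side setting, shown here for the AWS CLI):

# Use path-style S3 URLs (http://localhost:4566/my-bucket/key)
# instead of virtual-hosted-style ones (http://my-bucket.localhost:4566/key)
aws configure set default.s3.addressing_style path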

Pro Tip: Always test critical components on real AWS infrastructure before moving to production.

Conclusion

LocalStack offers a powerful way to develop and test AWS-based applications locally. It intercepts AWS API requests and uses Python-based handlers to mimic real AWS behavior. You get the benefit of a cloud-like environment without the complexity or cost of using actual cloud resources. It’s a great tool for developers working on cloud projects, and if you’re experimenting with infrastructure tools like Terraform, it’s a game changer.
