A Beginner's Guide to System Design

Table of contents
- Step 1: Client Meets Server
- Step 2: DNS - The Internet's Phonebook
- Step 3: Server Capacity and Scaling
- Step 4: Horizontal Scaling (Scaling Out)
- Step 5: Load Balancer - The Traffic Cop
- Step 6: CDN - Delivery Trucks for Static Files
- Step 7: Caching - Store It for Later
- Step 8: Microservices - Dividing Responsibilities
- Step 9: API Gateway - The Front Door
- Step 10: Async Processing with Queues

Building a scalable and reliable system might sound complicated, but it becomes much easier when explained through simple, real-world examples. Let's imagine you are creating your own version of Amazon, a high-traffic e-commerce platform. This blog post will guide you through essential system design concepts in plain English, connecting each component logically as your application grows.
Step 1: Client Meets Server
When a user opens your website or mobile app, they are the client. This could be any internet-enabled device — a smartphone, laptop, tablet, or even an IoT device.
The server is the computer (or machine) that hosts your application. It runs 24/7 and has a public IP address — just like your house has a postal address. This address is how users find your service online.
But since remembering IP addresses like 192.168.1.1 is hard, we need a friendlier way...
Step 2: DNS - The Internet's Phonebook
We use domain names like amazon.com instead of IP addresses. The DNS (Domain Name System) works like the phonebook of the internet.
When you type amazon.com in your browser:
1. Your browser contacts the DNS server
2. The DNS server returns the corresponding IP address
3. Your request is routed to the correct server
Analogy: Just like searching "Pizza Hut" in your phone contacts to get the number, DNS finds the IP address for a domain.
AWS Component: Amazon Route 53 handles domain registration and DNS resolution.
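If you want to see this lookup in action, here is a minimal sketch using Python's standard library (the exact IP you get back will vary, since amazon.com maps to many servers):

```python
import socket

# Ask DNS for the IP address behind a domain name, just like the browser does.
ip_address = socket.gethostbyname("amazon.com")
print(f"amazon.com resolves to {ip_address}")
```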
Step 3: Server Capacity and Scaling
Initially, your server might be a single virtual machine running on the cloud (like an EC2 instance in AWS). It has a fixed amount of CPU, memory (RAM), and storage.
As user traffic increases, say during a sale, the server might struggle to handle all requests.
Vertical Scaling (Scaling Up)
You upgrade the same server — more CPU, RAM, and storage.
Analogy: Adding more floors and staff to your single store.
Problems:
- Costly
- Introduces downtime during upgrades
- Hardware limits eventually stop further scaling
Step 4: Horizontal Scaling (Scaling Out)
Instead of expanding one store, open multiple stores to serve more customers simultaneously.
This is horizontal scaling — spinning up more servers (EC2 instances) that handle requests in parallel.
Analogy: Opening branches of your shop in different cities.
Now comes the question — how do you send customers to the right branch?
Step 5: Load Balancer - The Traffic Cop
A Load Balancer distributes incoming traffic across multiple servers to ensure no single server gets overloaded.
Example:
If 100 customers arrive:
- 25 go to Server A
- 25 to Server B
- and so on...
The Elastic Load Balancer (ELB) in AWS automatically checks which servers are healthy and distributes traffic using algorithms like Round Robin.
Analogy: A smart receptionist who directs incoming customers to the least busy counter.
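Here is a rough sketch of the Round Robin idea in Python. The server names are placeholders, and a real ELB adds health checks on top of this simple rotation:

```python
from itertools import cycle

# Hypothetical pool of backend servers sitting behind the load balancer.
servers = ["server-a", "server-b", "server-c", "server-d"]
rotation = cycle(servers)  # Round Robin: keep cycling through the pool

def route_request(request_id: int) -> str:
    """Send each incoming request to the next server in the rotation."""
    target = next(rotation)
    print(f"request {request_id} -> {target}")
    return target

# 100 requests end up spread evenly: 25 per server.
for i in range(100):
    route_request(i)
```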
Step 6: CDN - Delivery Trucks for Static Files
Your users could be in India, the US, or anywhere globally. Fetching files from your main server each time is slow and costly.
A CDN (Content Delivery Network) like CloudFront caches your static files (images, CSS, JS, videos) at edge locations closer to the users.
Analogy: Regional warehouses delivering items quickly to customers instead of shipping everything from HQ.
Benefit: Faster load times, lower bandwidth usage, better user experience.
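One common way to set this up is to upload static files to S3 with a Cache-Control header, so CloudFront's edge locations can hold onto a copy instead of asking the origin every time. The bucket and file names below are placeholders, and this is just a sketch of one possible configuration:

```python
import boto3

s3 = boto3.client("s3")

# Upload a static asset with a long cache lifetime so edge locations
# can serve it without going back to the origin server.
s3.upload_file(
    "logo.png",                 # local file (placeholder)
    "my-static-assets-bucket",  # S3 bucket name (placeholder)
    "images/logo.png",          # key served through the CDN
    ExtraArgs={"CacheControl": "max-age=86400", "ContentType": "image/png"},
)
```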
Step 7: Caching - Store It for Later
Frequently accessed data, such as product details or user sessions, doesn't need to be fetched from the database every time.
Instead, use caching to keep it temporarily in memory with Redis (via Amazon ElastiCache).
Benefits:
- Lightning-fast responses
- Reduced database load
Example:
A product page with 10,000 views per hour can be cached to avoid hitting the database each time.
Analogy: Keeping FAQs written down instead of explaining them over and over.
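A minimal cache-aside sketch with the redis-py client might look like this; fetch_product_from_db is a stand-in for your real database query:

```python
import json
import redis  # assumes the redis-py client is installed

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id: str) -> dict:
    # Placeholder for a real database query.
    return {"id": product_id, "name": "Sample product", "price": 19.99}

def get_product(product_id: str) -> dict:
    """Cache-aside: try Redis first, fall back to the database on a miss."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)                # cache hit: no database call
    product = fetch_product_from_db(product_id)  # cache miss: query the database
    cache.setex(key, 300, json.dumps(product))   # keep it in memory for 5 minutes
    return product
```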
Step 8: Microservices - Dividing Responsibilities
Instead of one massive codebase, divide your app into microservices, each with a single responsibility:
- Auth Service: Login/Registration
- Order Service: Place/view orders
- Payment Service: Payment processing
Each service runs on its own EC2 instance (or container), can be scaled individually, and deployed separately.
Analogy: Dedicated counters at a shopping mall — billing, returns, customer service, etc.
Step 9: API Gateway - The Front Door
Users shouldn't need to know which microservice handles what.
An API Gateway acts as a central entry point and routes requests to appropriate services based on the URL path.
- /auth → Auth Service
- /orders → Order Service
- /payments → Payment Service
AWS Component: Amazon API Gateway
Analogy: A receptionist who listens to your query and connects you to the right department.
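Stripped down to its core, that path-based routing looks something like the sketch below; the internal service addresses are made up for illustration:

```python
# Hypothetical internal addresses for each microservice.
ROUTES = {
    "/auth": "http://auth-service.internal",
    "/orders": "http://order-service.internal",
    "/payments": "http://payment-service.internal",
}

def route(path: str) -> str:
    """Forward a request to the service whose prefix matches the URL path."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    raise ValueError(f"No service registered for {path}")

print(route("/orders/123"))  # -> http://order-service.internal
```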
Step 10: Async Processing with Queues
Certain actions — like sending order confirmation emails — don’t need to be done instantly.
Doing them synchronously blocks the response to the user.
Instead, use message queues like Amazon SQS to offload these tasks. A worker (EC2 instance or Lambda function) processes messages asynchronously.
Example:
- The Payment Service sends a message to the queue
- An Email Worker picks up the message and sends the email
Analogy: A waiter takes your order and sends it to the kitchen, then moves on to the next table instead of waiting for the food.
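Here is a rough sketch of both sides using boto3. The queue URL is a placeholder, and send_confirmation_email stands in for whatever email code your worker actually runs:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/email-queue"  # placeholder

def send_confirmation_email(message_body: str) -> None:
    print(f"sending email for: {message_body}")  # placeholder for real email logic

# Producer side (e.g. the Payment Service): enqueue the work and return immediately.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"order_id": "123", "email": "user@example.com"}',
)

# Worker side: poll the queue, handle each message, then delete it.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in response.get("Messages", []):
    send_confirmation_email(message["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```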
Bonus: Rate Limiting
External services like Gmail limit how many emails you can send per second. Your worker can be throttled to respect this limit.
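A simple way to do that is to space out sends with a fixed delay. The limit below is an assumed number for illustration, not Gmail's actual quota:

```python
import time

EMAILS_PER_SECOND = 10                  # assumed provider limit
MIN_INTERVAL = 1.0 / EMAILS_PER_SECOND

def send_confirmation_email(body: str) -> None:
    print(f"sending email for: {body}")  # placeholder for real email logic

def send_with_throttle(message_bodies: list[str]) -> None:
    """Space out sends so the worker never exceeds the provider's rate limit."""
    for body in message_bodies:
        send_confirmation_email(body)
        time.sleep(MIN_INTERVAL)        # simple fixed-delay throttle
```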
This design ensures high availability, fault tolerance, and scalability by decoupling services and leveraging managed AWS components to handle different parts of the e-commerce workflow.