Scalable Chat System Using WebSockets, Load Balancer, Redis, and DynamoDB

M S Nishaanth
5 min read

In the world of modern chat applications, real-time communication is key. Whether it's for a messaging platform, collaborative tool, or gaming app, WebSockets offer a persistent and efficient channel for two-way communication. But what happens when you need to support thousands, or even millions, of users? That's where a scalable architecture comes into play.

This blog will walk you through building a highly scalable chat microservice using WebSockets, Redis, DynamoDB, and a load balancer. We'll explore how messages are routed, stored, and delivered efficiently across distributed services.


🌐 Key Components

1. WebSocket Servers

These are the actual servers that handle WebSocket connections. Each server manages the live socket connections for the users it serves.

2. Redis (Pub/Sub + Socket Map)

Redis is used to keep track of where users are connected (i.e., which socket server they are on) and to publish/subscribe messages across different WebSocket servers.

3. DynamoDB (Persistent Storage)

Every chat message is stored in DynamoDB for durability and history retrieval.

4. Load Balancer

Distributes new incoming socket connections evenly across your pool of WebSocket servers.
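
As a quick reference, here is a minimal sketch of the naming conventions and data shapes the rest of this post assumes. The exact key formats, the "Messages" table name, and its attributes are illustrative choices, not requirements.

    // Assumed conventions (illustrative names, not part of any library).

    // Redis socket map: one key per connected user.
    //   user:<userId>  ->  '{"socketId": "...", "server": "ws1"}'
    const socketMapKey = (userId) => `user:${userId}`;

    // Redis pub/sub: one channel per WebSocket server.
    //   messages:<serverId>   e.g. messages:ws3
    const serverChannel = (serverId) => `messages:${serverId}`;

    // Hypothetical DynamoDB item for a stored message
    // (partition key: to, sort key: sentAt).
    const exampleMessageItem = {
      to: "userB",            // recipient userId
      sentAt: 1718000000000,  // epoch millis
      from: "userA",
      message: "Hi",
      delivered: false,       // used for offline delivery
    };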


🗺️ Architecture Diagram

      ┌────────────┐
      │   Client   │
      └─────┬──────┘
            │
            ▼
      ┌───────────────┐
      │ Load Balancer │
      └─────┬─────────┘
     ┌──────┼───────┐
     ▼      ▼       ▼
 ┌──────┐ ┌──────┐ ┌──────┐
 │ WS1  │ │ WS2  │ │ WS3  │   (WebSocket Servers)
 └──┬───┘ └──┬───┘ └──┬───┘
    │        │        │
    └────────┴────────┘
             ▼
         ┌────────┐
         │ Redis  │ (Pub/Sub + Socket Map)
         └────────┘
             ▼
        ┌──────────┐
        │ DynamoDB │ (Message Storage)
        └──────────┘

🧩 Setup Overview (Example):

Component             Address
Load Balancer         ws://chat.example.com
WebSocket Server 1    ws://localhost:3001/ws
WebSocket Server 2    ws://localhost:3002/ws
WebSocket Server 3    ws://localhost:3003/ws
Redis                 redis://localhost:6379
Message API           http://localhost:4000
DynamoDB              AWS managed

🚪 User Connection Flow

1. User A opens a chat app

  • The frontend initiates a WebSocket connection:

      const socket = new WebSocket("ws://chat.example.com/ws?userId=userA");
    

2. Load Balancer Routes the Request

  • The load balancer forwards this request to one of the WebSocket servers (e.g., WS1).

  • WebSocket handshake occurs.

3. WebSocket Server (WS1) handles the connection

  • Extracts userId from the query string

  • Stores the socket info in Redis:

      SET user:userA '{"socketId": "xyz123", "server": "ws1"}'
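
A minimal sketch of what this connection handler might look like on WS1, using the ws and ioredis packages. The port, server id, and socket-id generation are assumptions for illustration, not a fixed implementation.

    const { WebSocketServer } = require("ws");
    const Redis = require("ioredis");

    const SERVER_ID = "ws1"; // this instance's id (assumed)
    const redis = new Redis("redis://localhost:6379");
    const wss = new WebSocketServer({ port: 3001, path: "/ws" });

    // Live sockets handled by this server, keyed by socketId.
    const sockets = new Map();

    wss.on("connection", (socket, req) => {
      // Extract userId from the query string of the upgrade request.
      const { searchParams } = new URL(req.url, "http://localhost");
      const userId = searchParams.get("userId");
      const socketId = Math.random().toString(36).slice(2);

      sockets.set(socketId, socket);

      // Record where this user is connected so any server can find them.
      redis.set(`user:${userId}`, JSON.stringify({ socketId, server: SERVER_ID }));

      socket.on("close", () => {
        sockets.delete(socketId);
        redis.del(`user:${userId}`); // the user is now offline
      });
    });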
    

💬 Sending a Message: User A ➔ User B

First Message:

  1. User A sends:

     {
       "type": "message",
       "to": "userB",
       "message": "Hi"
     }
    
  2. WS1 receives the message

  3. WS1 looks up userB in Redis:

     GET user:userB
    
  4. Redis responds with userB's entry, which says server: ws3 (userB is connected to WS3)

  5. WS1 publishes to Redis pub/sub:

     PUBLISH messages:ws3 '{"to": "userB", "from": "userA", "message": "Hi"}'
    
  6. WS3 receives it and sends it to userB via WebSocket

  7. Message is also saved to DynamoDB

Second Message:

  • No new socket, no new Redis write

  • Uses existing socket to send again

  • Still does Redis lookup and pub/sub for delivery
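
Putting the send path together, here is a rough sketch of the handler WS1 might run for each incoming message. The Messages table, its attribute names, and the handleIncoming helper are hypothetical, and error handling is omitted.

    const Redis = require("ioredis");
    const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
    const { DynamoDBDocumentClient, PutCommand } = require("@aws-sdk/lib-dynamodb");

    const redis = new Redis("redis://localhost:6379");
    const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

    // Called by WS1 when a connected user (e.g. userA) sends a chat payload.
    async function handleIncoming(fromUserId, raw) {
      const { to, message } = JSON.parse(raw); // { type, to, message }

      // 1. Look up where the recipient is connected.
      const entry = await redis.get(`user:${to}`);
      const target = entry ? JSON.parse(entry) : null;

      const outgoing = { to, from: fromUserId, message };

      // 2. If the recipient is online, publish to that server's channel.
      if (target) {
        await redis.publish(`messages:${target.server}`, JSON.stringify(outgoing));
      }

      // 3. Persist the message either way; flag it undelivered if offline.
      await ddb.send(new PutCommand({
        TableName: "Messages",
        Item: { ...outgoing, sentAt: Date.now(), delivered: Boolean(target) },
      }));
    }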


📴 User Goes Offline

1. WebSocket Disconnects

  • Server receives socket.on('close')

  • Removes user from Redis

2. Another user sends them a message

  • Redis GET returns null

  • Server knows the user is offline

  • Stores message in DynamoDB with delivery status

3. User Comes Back Online

  • New WebSocket connects

  • Server updates Redis with new info

  • Checks for undelivered messages

  • Pushes them over the new socket
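
A sketch of this reconnect path, again assuming the hypothetical Messages table with to as the partition key, sentAt as the sort key, and a delivered flag; handleReconnect is an illustrative helper, not a library API.

    const Redis = require("ioredis");
    const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
    const {
      DynamoDBDocumentClient, QueryCommand, UpdateCommand,
    } = require("@aws-sdk/lib-dynamodb");

    const redis = new Redis("redis://localhost:6379");
    const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

    // Called right after a user re-establishes a WebSocket connection.
    async function handleReconnect(userId, socketId, serverId, socket) {
      // Re-register the user in the socket map.
      await redis.set(`user:${userId}`, JSON.stringify({ socketId, server: serverId }));

      // Fetch messages that arrived while the user was offline
      // ("to" is aliased as #to to avoid clashing with DynamoDB reserved words).
      const { Items = [] } = await ddb.send(new QueryCommand({
        TableName: "Messages",
        KeyConditionExpression: "#to = :u",
        FilterExpression: "delivered = :f",
        ExpressionAttributeNames: { "#to": "to" },
        ExpressionAttributeValues: { ":u": userId, ":f": false },
      }));

      for (const item of Items) {
        // Push the missed message over the new socket, then mark it delivered.
        socket.send(JSON.stringify(item));
        await ddb.send(new UpdateCommand({
          TableName: "Messages",
          Key: { to: userId, sentAt: item.sentAt },
          UpdateExpression: "SET delivered = :t",
          ExpressionAttributeValues: { ":t": true },
        }));
      }
    }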


🚪 Is the Load Balancer a Socket Server?

No. A load balancer is not a socket server. Its job is to route the initial HTTP/WebSocket upgrade request to one of the WebSocket servers; the chosen server is the one that actually manages the socket.

What the Load Balancer Does:

  • Routes the initial WebSocket upgrade request to one backend

  • Uses round-robin or sticky sessions to choose that backend

  • After the upgrade, the connection stays pinned to the chosen WS server (a proxying balancer keeps relaying frames over the same connection; it never re-routes mid-conversation)

  • All chat-level routing (who is connected where) is handled by the WebSocket servers and Redis, not the balancer


🚚 Example With 10,000 Users

Infrastructure:

Component            Count
Load Balancer        1
WebSocket Servers    3
Redis                1
DynamoDB Table       1

Load Balancer Behavior:

  • First user ➔ WS1

  • Second user ➔ WS2

  • Third user ➔ WS3

  • Fourth user ➔ WS1 (round-robin)

Each WS server ends up handling roughly 3,333 users (10,000 ÷ 3), while Redis tracks all socket associations.


📬 WebSocket Communication Endpoints

Endpoint          Method           Description
/ws               WS               WebSocket upgrade endpoint
messages:wsX      Redis Pub/Sub    Channel for WS server wsX
user:userId       Redis key        Stores socket ID and server for a user
DynamoDB table    -                Stores persistent messages

🚀 Horizontal Scaling and Communication Between Servers

Servers communicate using Redis Pub/Sub:

  • When WS1 needs to message a user on WS3, it publishes to messages:ws3 channel

  • WS3 is subscribed and receives it

No direct server-to-server calls are needed; Redis handles the communication.
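
The subscriber side on each server is small. Here is a sketch of WS3 listening on its own channel; ioredis needs a dedicated connection in subscribe mode, and sockets is the same in-memory map populated by the connection handler sketched earlier.

    const Redis = require("ioredis");

    const SERVER_ID = "ws3"; // this instance's id (assumed)
    const subscriber = new Redis("redis://localhost:6379"); // dedicated pub/sub connection
    const redis = new Redis("redis://localhost:6379");      // regular connection for GET/SET
    const sockets = new Map(); // socketId -> WebSocket, filled by the connection handler

    // Each server subscribes only to its own channel.
    subscriber.subscribe(`messages:${SERVER_ID}`);

    subscriber.on("message", async (channel, payload) => {
      const { to, from, message } = JSON.parse(payload);

      // Find the recipient's local socket via the socket map.
      const entry = await redis.get(`user:${to}`);
      if (!entry) return; // user disconnected between lookup and delivery

      const { socketId } = JSON.parse(entry);
      const socket = sockets.get(socketId);
      if (socket) {
        socket.send(JSON.stringify({ from, message }));
      }
    });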


✨ Conclusion

This architecture enables your chat app to:

  • Scale horizontally

  • Deliver messages in real-time

  • Store persistent history

  • Handle user reconnects and offline messages

And with Redis for speed and DynamoDB for durability, you get the best of both. The load balancer spreads users evenly across servers, and because each WebSocket connection stays pinned to one server, sockets never need to be re-established as the system scales.

If you're building a high-performance chat backend, this is the blueprint to follow.

