Building a Scalable Real-Time Driver Tracking System

Subham

Building a Real-Time Delivery Tracking System with Socket.IO, Redis Pub/Sub, the Redis Streams Adapter, and Kafka

I recently worked on an exciting project for a client—a delivery app similar to Zomato, where users can track their driver's location live on a map.


The complete application used Flutter for the frontend with both NestJS and Golang powering different versions of the backend.

While I developed two separate implementations, this article focuses purely on the core tracking logic that's completely language-independent. If you're curious about the actual code, everything is available on GitHub: https://github.com/Subham-Maity/RTLS-Scale.

But don't worry about the specific programming languages—I've designed this guide to be accessible to anyone interested in understanding the fundamental architecture of real-time location tracking systems.

Let me walk you through how I built this prototype, how it works, and how to scale it for real-world applications.

Important disclaimer: this is not production-ready code, as a full commercial implementation would require additional business logic, security considerations, battery optimization, and many other factors I won't cover here. I'm also not addressing driver matching algorithms or distance calculations—this article focuses exclusively on the real-time tracking system architecture.

Along the way, I'll share insights from my experience, including practical advice on backend-frontend communication and what I learned about building reliable real-time systems. By the end, you'll understand exactly how that little moving dot on your food delivery app actually works behind the scenes!


🔥Connect: https://www.subham.online

🔥Repo: https://github.com/Subham-Maity/RTLS-Scale

🔥Twitter: https://twitter.com/TheSubhamMaity

🔥LinkedIn: https://www.linkedin.com/in/subham-xam


How the Prototype Works

Imagine this: you open the prototype in a browser, and there are two buttons—Enter as User or Enter as Driver. Pretty straightforward, right?

[Screenshot: home page]

If you pick Driver, the app starts sending your location (latitude and longitude) to the server every few seconds.

[Screenshot: driver view]

If you pick User, you see the driver’s location updating live on a map.

[Screenshot: user view]

To test it, I opened the driver page on my phone and the user page on my laptop. I walked around a bit with my phone, and on my laptop, I could see my position moving on the map in real time. It felt satisfying, like “yes, this is working!” But this was just a prototype. In a real app, you’d need proper authentication, middleware, and all that stuff. Here, my focus was on the core logic: how to send the driver’s location to the user continuously, without any hiccups.


Server 1: The Basic WebSocket Setup

Let’s start with the simplest way I did this, using WebSockets. The code for this is in

1. server (socket)/src/websockets/location.gateway.ts

Here’s how it works, step-by-step:

  1. Driver Sends Location: The driver’s app connects to the server using WebSockets and sends a send-location event with their latitude and longitude every few seconds. Think of it like the driver saying, “Hey server, here’s where I am right now!”

  2. Server Broadcasts It: The server listens for this event and sends the location to all connected clients (like the user’s app) using a receive-location event. It’s like the server shouting, “Everyone, here’s the driver’s new position!”

  3. User Updates Map: The user’s app listens for receive-location events and moves the driver’s dot on the map. Simple and quick.
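The three steps above can be sketched as a tiny in-memory model. The event names (`send-location`, `receive-location`) come from the article; the little bus below is only an illustration of the broadcast pattern, not the actual Socket.IO API:

```typescript
// Toy model of the basic broadcast flow: one driver sends, ALL users receive.
type DriverLocation = { id: string; latitude: number; longitude: number };
type Handler = (data: DriverLocation) => void;

class TinyBroadcastBus {
  private listeners: Handler[] = [];

  // A user app registering for "receive-location" updates.
  onReceiveLocation(handler: Handler): void {
    this.listeners.push(handler);
  }

  // The driver app firing "send-location"; the server relays it to everyone.
  sendLocation(data: DriverLocation): void {
    for (const handler of this.listeners) handler(data);
  }
}

const bus = new TinyBroadcastBus();
const seen: DriverLocation[] = [];
bus.onReceiveLocation((loc) => seen.push(loc)); // user 1
bus.onReceiveLocation((loc) => seen.push(loc)); // user 2
bus.sendLocation({ id: "driver-42", latitude: 22.57, longitude: 88.36 });
// Both users got the update, even if only one of them cares about driver-42.
```

Notice the flaw this exposes: every connected user receives every update, which is exactly the scaling problem discussed next.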

That's it!

For a small setup, this works like a charm. But then I started thinking—what if there are hundreds or thousands of drivers? Will this still hold up?

Is This Scalable? Not Quite


Here’s where I hit a wall:

  • Too Many Connections: Every WebSocket connection uses server resources—CPU, memory, etc. With thousands of drivers and users, one server can’t handle it alone. It’ll slow down or crash.

  • Wasting Data: The server sends every driver’s update to all users. So if there are 100 drivers, each user gets 100 updates every few seconds, even though they only care about their own driver. That’s a lot of useless data clogging the system.

  • Adding More Servers: If I add more servers to share the load, how do I make sure the right updates reach the right users? Without some clever trick, it’s a headache. Assuming you're a clever programmer, feel free to drop any tricky solutions in the comments 🙂

Verdict: This is fine for a prototype or a small app with less than 100 drivers. But for a big delivery app? No chance—it’ll break.
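To put some rough numbers on the "wasting data" problem, here's a back-of-envelope sketch. All the figures are made-up assumptions, just to show how fast naive broadcast fan-out grows:

```typescript
// Naive broadcast: every user receives every driver's update.
const drivers = 100;
const users = 1000;
const updatesPerDriverPerSec = 0.5; // one update every 2 seconds (assumed)
const payloadBytes = 100;           // rough JSON payload size (assumed)

// Each driver update fans out to every connected user.
const messagesPerSec = drivers * updatesPerDriverPerSec * users; // 50,000 msg/s
const bandwidthKBps = (messagesPerSec * payloadBytes) / 1024;    // ~4,883 KB/s

console.log(`${messagesPerSec} msg/s, ~${bandwidthKBps.toFixed(0)} KB/s outbound`);
```

With targeted rooms (next section), each user receives only their own driver's updates, so the outbound message rate drops from drivers × users to roughly one message per tracking user.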


Server 2: Adding Redis Pub/Sub

So, I needed a better way. That’s when I brought in Redis Pub/Sub. Redis is a super-fast in-memory store, and its publish-subscribe system is perfect for scaling real-time workloads. The code for this is in

2. server (socket + redis pub-sub)/src/websockets/location.gateway.ts

Here’s how I made it work, step-by-step:

  1. Driver Publishes Location: When the driver sends a send-location event, the server doesn’t broadcast it directly. Instead, it publishes the location to a Redis channel called location-updates. Here’s the code:
   @SubscribeMessage('send-location')
   handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
     const locationData = {
       id: client.id,
       latitude: data.latitude,
       longitude: data.longitude,
     };
     this.pubSubService.publish('location-updates', JSON.stringify(locationData));
   }
  2. Server Subscribes and Targets Updates: The server subscribes to the location-updates channel and sends the update only to specific users using WebSocket rooms. Each driver has a room (named after their ID), and users join that room to track them. Here’s how it’s set up in the constructor:

     constructor(private pubSubService: PubSubService) {
       this.pubSubService.subscribe('location-updates', (message) => {
         const locationData = JSON.parse(message);
         this.server.to(locationData.id).emit('receive-location', locationData);
       });
     }
    

    And when a user wants to track a driver:

     @SubscribeMessage('track-driver')
     handleTrackDriver(client: Socket, driverId: string) {
       client.join(driverId);
     }
    
  3. Scaling with Multiple Servers: Redis makes this easy. Multiple NestJS servers can subscribe to the same location-updates channel. When a driver’s location is published, all servers get it and send it to the right room. No mess, no fuss.
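The routing logic in the steps above can be modeled without Redis at all. This toy server shows just the part that matters: a message published once on the channel is delivered only to the room named after the driver's ID (a real setup uses the redis client's publish/subscribe, as in the gateway code above):

```typescript
// Toy model of Redis Pub/Sub + rooms: publish once, deliver only to trackers.
type DriverLocation = { id: string; latitude: number; longitude: number };

class ToyPubSubServer {
  private rooms = new Map<string, Array<(loc: DriverLocation) => void>>();

  // "track-driver": a user joins the room named after the driver's id.
  trackDriver(driverId: string, onUpdate: (loc: DriverLocation) => void): void {
    const room = this.rooms.get(driverId) ?? [];
    room.push(onUpdate);
    this.rooms.set(driverId, room);
  }

  // Simulates a message arriving on the "location-updates" channel:
  // it is emitted only to that driver's room, not to everyone.
  handleChannelMessage(message: string): void {
    const loc: DriverLocation = JSON.parse(message);
    for (const deliver of this.rooms.get(loc.id) ?? []) deliver(loc);
  }
}

const server = new ToyPubSubServer();
const updates: DriverLocation[] = [];
server.trackDriver("driver-1", (loc) => updates.push(loc));

server.handleChannelMessage(JSON.stringify({ id: "driver-1", latitude: 1, longitude: 2 }));
server.handleChannelMessage(JSON.stringify({ id: "driver-2", latitude: 3, longitude: 4 }));
// Only the driver-1 update reaches the user tracking driver-1.
```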


Why This Is Better

  • Targeted Updates: Only users tracking a specific driver get their updates. No more flooding everyone with data they don’t need.

  • Horizontal Scaling: Add more servers, and Redis handles the coordination. Each server manages its own clients, and the load gets shared.

This is a big step up from the basic setup. But I found something even better—keep reading!



Server 3: Redis Streams Adapter for the Win

While Redis Pub/Sub was good, I stumbled upon the Redis Streams Adapter for Socket.IO, and it’s like Pub/Sub’s big brother 💀—more powerful and reliable. The code for this is in:

3. server (socket + redis streams adapter)/src/redis/redis.module.ts

3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts

3. server (socket + redis streams adapter)/src/websockets/location.gateway.ts

Here’s how it works, step-by-step:

  1. Set Up the Adapter: I created a RedisIoAdapter in 3. server (socket + redis streams adapter)/src/redis/redis-io-adapter.ts to use Redis Streams with Socket.IO:

     export class RedisIoAdapter extends IoAdapter {
       private redisClient: Redis;
       constructor(app: INestApplication, redisClient: Redis) {
         super(app);
         this.redisClient = redisClient;
       }
       createIOServer(port: number, options?: ServerOptions): any {
         const server = super.createIOServer(port, options);
         server.adapter(createAdapter(this.redisClient));
         return server;
       }
     }
    
  2. Driver Sends Location: Same as before—the driver sends a send-location event, and the server emits it to their room:

     @SubscribeMessage('send-location')
     handleLocation(client: Socket, data: { latitude: number; longitude: number }) {
       const locationData = {
         id: client.id,
         latitude: data.latitude,
         longitude: data.longitude,
       };
       this.server.to(client.id).emit('receive-location', locationData);
     }
    
  3. Users Track Drivers: Users join the driver’s room with a track-driver event:

     @SubscribeMessage('track-driver')
     handleTrackDriver(client: Socket, driverId: string) {
       client.join(driverId);
     }
    
  4. Magic of Streams: The Redis Streams Adapter handles everything else. It distributes updates across all server instances, ensures no messages are lost, and keeps rooms working seamlessly.

Why This Beats Pub/Sub


Here’s a quick comparison:

| Feature | Redis Pub/Sub | Redis Streams Adapter |
| --- | --- | --- |
| Reliability | If a server is down, it misses updates. | Stores messages, so servers catch up later. |
| Scalability | Good for medium loads, but struggles with huge volumes. | Uses consumer groups for big scale. |
| Message Order | Order isn’t always guaranteed. | Strict order, great for tracking. |
| Ease of Use | You manage pub/sub yourself. | Socket.IO does it all—less code! |
  • Reliability: If a server crashes with Pub/Sub, it misses updates. With Streams, messages are saved, so nothing gets lost.

  • Scalability: Streams can handle way more drivers and users with consumer groups splitting the work.

  • Simplicity: No need to write pub/sub logic—Socket.IO handles it behind the scenes.
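The reliability point is the key difference, and it's easy to demonstrate. In a stream, messages live in an append-only log with IDs, so a consumer that was down can resume from the last ID it saw. This is a toy in-memory log to show the idea, not the real Redis Streams API:

```typescript
// Toy append-only log: a crashed consumer can replay what it missed.
type Entry = { id: number; payload: string };

class ToyStream {
  private log: Entry[] = [];
  private nextId = 1;

  add(payload: string): number {
    const id = this.nextId++;
    this.log.push({ id, payload });
    return id;
  }

  // Read everything after `lastSeenId` — this is what lets a restarted
  // server catch up instead of silently losing updates (as pub/sub would).
  readAfter(lastSeenId: number): Entry[] {
    return this.log.filter((e) => e.id > lastSeenId);
  }
}

const stream = new ToyStream();
stream.add("loc-1");
const lastSeen = stream.add("loc-2"); // consumer crashes after this point
stream.add("loc-3");
stream.add("loc-4");

const missed = stream.readAfter(lastSeen); // loc-3 and loc-4 replayed on restart
```

With plain Pub/Sub, `loc-3` and `loc-4` would simply be gone for the crashed server.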

This is perfect for a large app with lots of users. But what about massive scale? That’s where Kafka comes in.



Future-Proofing with Kafka

Now, imagine your app grows huge—thousands of drivers, millions of users, and you want to do fancy things like analytics or logging alongside tracking. That’s when Kafka enters the picture. It’s a distributed streaming platform built for handling tons of real-time data.

Here’s the basic plan:

  • Driver sends location via WebSockets (send-location event).

  • Server pushes it to a Kafka topic, like driver-locations.

  • A consumer service reads from the topic and sends updates to users via WebSockets.
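The plan above hinges on one Kafka idea: independent consumer groups (tracking, analytics, logging) each read the same topic at their own offset. Here's a toy in-memory version of that idea; a real setup would use a Kafka client library and an actual broker:

```typescript
// Toy Kafka topic: each consumer group keeps its own offset into the log.
type TopicRecord = { offset: number; value: string };

class ToyTopic {
  private records: TopicRecord[] = [];
  private offsets = new Map<string, number>(); // group name -> next offset

  // The WebSocket server would produce here on every "send-location" event.
  produce(value: string): void {
    this.records.push({ offset: this.records.length, value });
  }

  // Each group consumes at its own pace without affecting the others.
  consume(group: string, max = 10): TopicRecord[] {
    const start = this.offsets.get(group) ?? 0;
    const batch = this.records.slice(start, start + max);
    this.offsets.set(group, start + batch.length);
    return batch;
  }
}

const topic = new ToyTopic();
topic.produce(JSON.stringify({ id: "driver-1", latitude: 1, longitude: 2 }));
topic.produce(JSON.stringify({ id: "driver-1", latitude: 1.1, longitude: 2.1 }));

const trackingBatch = topic.consume("tracking");   // pushes updates to users
const analyticsBatch = topic.consume("analytics"); // same data, separate offset
```

This is why Kafka pays off at scale: the same location firehose feeds tracking, analytics, and logging without any of them interfering with the others.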

Kafka is overkill for small apps, but for enterprise-level scale, it’s a game-changer. I’ll add a Kafka setup to my GitHub repo soon—keep an eye out!


What to Tell Frontend Devs


As a backend dev, I was scratching my head about what to tell the frontend team. Turns out, it’s pretty simple:

  • Driver App:

    • Connect to the WebSocket server.

    • Send send-location events with latitude and longitude every few seconds.

    • Maybe show the driver’s own location on a map, if needed.

  • User App:

    • Connect to the WebSocket server.

    • Listen for receive-location events and update the map.

    • Send a track-driver event with the driver’s ID to join their room.

That’s it! The frontend devs will love how easy this is—just a few events, and the backend handles the heavy lifting.
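The whole contract fits in three event names and one payload shape. A small shared module like this (the event names are from the article; the module itself is a hypothetical convenience, not part of the repo) keeps the driver app, user app, and backend in sync:

```typescript
// Shared event contract between frontend and backend.
interface LocationPayload {
  id: string;       // driver's socket/driver id
  latitude: number;
  longitude: number;
}

const EVENTS = {
  SEND_LOCATION: "send-location",       // driver -> server, every few seconds
  RECEIVE_LOCATION: "receive-location", // server -> tracking users
  TRACK_DRIVER: "track-driver",         // user -> server, payload: driverId
} as const;

// Example payload a driver app would emit:
const payload: LocationPayload = { id: "driver-7", latitude: 22.57, longitude: 88.36 };
console.log(EVENTS.SEND_LOCATION, JSON.stringify(payload));
```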



Comparing the Approaches

Let’s break it down with a table to see how these methods stack up:

| Approach | Pros | Cons | Best For |
| --- | --- | --- | --- |
| Basic WebSockets | Easy to set up, works for small apps. | Not scalable, sends too much data. | Prototypes, small apps. |
| Redis Pub/Sub | Scales better, targets updates. | Misses updates if servers crash. | Medium-sized apps. |
| Redis Streams Adapter | Reliable, scalable, less code. | Slightly tricky to set up. | Large apps with many users. |
| Kafka | Handles huge scale, extra features. | Too much for small apps, needs infra. | Enterprise-level apps. |

So, that’s the full story! From a basic prototype to scaling for a real delivery app, this is how you make real-time tracking work. The code’s all on GitHub—go check it out.

Next time you’re waiting for your food and watching that driver dot move, you’ll know what’s happening behind the scenes.


Hope this clears things up—let me know if you have questions!
