Building a Video Transcoding Service Using Turborepo, NestJS, and React

In this project, I built a Video Transcoding Service using Turborepo, NestJS, React, Docker, and other tools. The system supports video uploads, queue-based background processing, format conversion with FFmpeg, and HLS output with auto-bitrate support. This article walks through the architecture, the tech stack, the challenges I faced, and some key lessons learned along the way.

Tech Stack

  • Turborepo – for monorepo orchestration

  • NestJS – backend and APIs (Auth, Upload, Queue Management)

  • React – frontend (upload form, progress viewer)

  • PostgreSQL + Prisma – database and ORM

  • BullMQ – job queue and worker system

  • Docker – isolated video processing environment

  • AWS S3 – for video storage (input & output)

Architecture Overview

The overall architecture follows this flow:

  1. User uploads a video via the frontend → backend receives it via NestJS.

  2. The backend uploads the raw video to AWS S3 and saves metadata in PostgreSQL.

  3. A new job is enqueued in BullMQ for video processing.

  4. A worker service picks up the job and spins up a Docker container with:

    • S3 video URL

    • AWS credentials

  5. Inside the Docker container:

    • The video is downloaded from S3

    • Converted to HLS (.m3u8) using FFmpeg, with multiple renditions and auto-bitrate support

    • The processed folder is uploaded back to S3

    • The master.m3u8 URL is logged to stdout

  6. The worker listens to Docker logs, extracts the master.m3u8 URL, and updates the database.

Everything is designed to be fully decoupled, scalable, and cloud-native.
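
Here is a rough sketch of steps 1 to 3 on the NestJS side. The queue name, the Prisma "video" model, and the env var names are placeholders for illustration, not the project's exact code:

```ts
import { Injectable } from '@nestjs/common';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { PrismaClient } from '@prisma/client';
import { Queue } from 'bullmq';

@Injectable()
export class UploadService {
  private s3 = new S3Client({ region: process.env.AWS_REGION });
  private prisma = new PrismaClient();
  // Queue and job names here are placeholders.
  private queue = new Queue('video-transcode', {
    connection: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 },
  });

  async handleUpload(file: Express.Multer.File) {
    const key = `raw/${Date.now()}-${file.originalname}`;

    // 1. Push the raw upload to S3.
    await this.s3.send(
      new PutObjectCommand({
        Bucket: process.env.S3_BUCKET,
        Key: key,
        Body: file.buffer,
        ContentType: file.mimetype,
      }),
    );

    // 2. Save metadata in PostgreSQL via Prisma (the "video" model is assumed).
    const video = await this.prisma.video.create({
      data: { originalName: file.originalname, s3Key: key, status: 'QUEUED' },
    });

    // 3. Enqueue a BullMQ job for the worker to pick up.
    await this.queue.add('transcode', { videoId: video.id, s3Key: key });

    return video;
  }
}
```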

Challenges Faced

1. Choosing the Right Queue System

At first, choosing which queue system to use was frustrating. I didn’t want the overhead of Kafka or RabbitMQ just to manage basic jobs. I needed a simple, reliable, and Node.js-friendly solution.

I chose BullMQ — it offers Redis-based queues with good developer experience and async/await support.
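
For context, consuming that queue with BullMQ is only a few lines. The sketch below uses the same placeholder names as above and leaves out the Docker handling, which is covered in challenge 3:

```ts
import { Worker } from 'bullmq';

// Minimal consumer sketch; "video-transcode" matches the placeholder queue
// name used on the NestJS side.
const worker = new Worker(
  'video-transcode',
  async (job) => {
    // job.data carries whatever the producer enqueued, e.g. { videoId, s3Key }.
    console.log(`Processing video ${job.data.videoId}`);
    // ...spin up the FFmpeg container here (see challenge 3 below)...
  },
  { connection: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 } },
);

worker.on('completed', (job) => console.log(`Job ${job.id} done`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));
```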

2. Video Processing Inside Docker

Running FFmpeg inside Docker was a challenge. Some public images worked partially, but they were either not customizable enough or too heavy.

I built my own lightweight Docker image optimized specifically for FFmpeg and S3 integration. This gave me full control, faster spin-up, and a smaller footprint.
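
Inside that container, the heart of the job is the FFmpeg invocation. I am not reproducing the project's exact ladder here; the sketch below shows an illustrative two-rendition HLS encode (720p and 480p) with a master playlist, which is the general shape of the command:

```ts
import { spawn } from 'node:child_process';
import { mkdirSync } from 'node:fs';

// Illustrative two-rendition ladder; the real image may use more renditions.
export function transcodeToHls(inputPath: string, outDir: string): Promise<void> {
  mkdirSync(outDir, { recursive: true });

  const args = [
    '-i', inputPath,
    // Map the video and audio once per rendition.
    '-map', '0:v:0', '-map', '0:a:0',
    '-map', '0:v:0', '-map', '0:a:0',
    '-filter:v:0', 'scale=-2:720', '-c:v:0', 'libx264', '-b:v:0', '2800k',
    '-filter:v:1', 'scale=-2:480', '-c:v:1', 'libx264', '-b:v:1', '1400k',
    '-c:a', 'aac', '-b:a', '128k',
    // HLS muxer: segment each rendition and emit a master playlist.
    '-f', 'hls',
    '-hls_time', '6',
    '-hls_playlist_type', 'vod',
    '-hls_segment_filename', `${outDir}/stream_%v_%03d.ts`,
    '-master_pl_name', 'master.m3u8',
    '-var_stream_map', 'v:0,a:0 v:1,a:1',
    `${outDir}/stream_%v.m3u8`,
  ];

  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', args, { stdio: 'inherit' });
    ffmpeg.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`)),
    );
  });
}
```

The -var_stream_map option ties each video/audio pair into its own variant playlist, and -master_pl_name writes the master.m3u8 that players use for auto-bitrate switching.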

3. Uploading from Inside Docker & Updating the Database

Uploading to S3 inside Docker is straightforward — but there's a twist:

  • I didn't want to download the video onto the main server

  • The Docker container has no direct access to the database

  • There was no easy way to "return" data from the container

Solution: Instead of returning the processed URL via API or database, I made the Docker container log the master.m3u8 URL.
The worker listens to stdout, parses the logs, and when a specific line is found (e.g., HLS_READY: <URL>), it updates the DB.
This lightweight pattern was clean, effective, and flexible.
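
Concretely, the worker side of that pattern can be sketched as below. The image name, env var names, and Prisma model are assumptions for illustration; the key part is scanning stdout line by line for the HLS_READY sentinel:

```ts
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export function runTranscodeContainer(videoId: string, s3Key: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // "video-transcoder" image name and env vars are placeholders.
    const docker = spawn('docker', [
      'run', '--rm',
      '-e', `INPUT_S3_KEY=${s3Key}`,
      '-e', `AWS_ACCESS_KEY_ID=${process.env.AWS_ACCESS_KEY_ID}`,
      '-e', `AWS_SECRET_ACCESS_KEY=${process.env.AWS_SECRET_ACCESS_KEY}`,
      'video-transcoder',
    ]);

    // Read container stdout line by line and watch for the sentinel log.
    const rl = createInterface({ input: docker.stdout });
    rl.on('line', async (line) => {
      if (line.startsWith('HLS_READY:')) {
        const hlsUrl = line.replace('HLS_READY:', '').trim();
        // Persist the playlist URL (the "video" model and fields are assumed).
        await prisma.video.update({
          where: { id: videoId },
          data: { status: 'READY', hlsUrl },
        });
        resolve(hlsUrl);
      }
    });

    // A real implementation should also handle a clean exit without the sentinel.
    docker.on('close', (code) => {
      if (code !== 0) reject(new Error(`Container exited with code ${code}`));
    });
  });
}
```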

Key Lessons Learned

  • Turborepo helped me manage shared types, interfaces, and services across multiple apps (frontend, backend, workers); a small example follows this list.

  • Docker is powerful but can be tricky when communicating with services outside of its context.

  • FFmpeg is a beast — combining formats, bitrates, and stream maps takes time and testing.

  • Streaming logs and designing your own communication protocols (like log-based status updates) can be extremely useful in decoupled systems.

  • BullMQ is enough for most video processing workloads unless you hit extreme scale.
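
As a small example of the shared-types point above, the job payload can live in a shared package that both the API and the worker import (the package path and names below are illustrative):

```ts
// packages/shared/src/transcode-job.ts (illustrative path inside the monorepo)
export interface TranscodeJobData {
  videoId: string;
  s3Key: string;
}

export type VideoStatus = 'QUEUED' | 'PROCESSING' | 'READY' | 'FAILED';
```

Typing the BullMQ producer and worker against TranscodeJobData means a change to the payload shape fails the build in every app instead of failing at runtime.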

What's Next?

Here are a few future improvements I’m planning:

  • Add retry & failure queue handling in BullMQ (sketched briefly after this list)

  • Better job status dashboard with real-time updates

  • CDN integration for fast HLS delivery

  • Auth + token-based video access control

  • Support for more formats (e.g., audio-only, 4K rendering)
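
For the retry item, BullMQ already supports per-job attempts and backoff, so the plan is roughly to enqueue jobs with options like these (the values are placeholders):

```ts
import { Queue } from 'bullmq';

const queue = new Queue('video-transcode', {
  connection: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 },
});

// Retry a failed transcode up to 3 times with exponential backoff;
// jobs that still fail stay in the failed set for inspection.
export async function enqueueTranscode(videoId: string, s3Key: string) {
  await queue.add(
    'transcode',
    { videoId, s3Key },
    {
      attempts: 3,
      backoff: { type: 'exponential', delay: 5000 },
      removeOnComplete: true,
    },
  );
}
```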


Written by Abhishek Shivale