Building My Own Deployment Platform Was Supposed to Be Simple: A Modern Alternative to Vercel Using NestJS


Other technologies used: NextJS.
It started on weekends as a way to learn and build at the same time: "I bet I could build something better." A few months later, I'm staring at a distributed system with five microservices, event-driven communication, and more complexity than I bargained for. But honestly? It's been the most educational project I've ever tackled, spanning both DevOps and backend technologies.
What I Actually Built
Picture this: You connect your GitHub repository, and within minutes, your application is live with a custom domain, SSL certificate, monitoring dashboard, and deployment logs. Whether it's a React frontend, Node.js backend, or full-stack application, the platform figures it out and handles the deployment pipeline automatically.
But here's the kicker - everything runs on Kubernetes with proper isolation, scaling, and all the enterprise-grade features you'd expect from a professional platform. Each user gets their own sandbox environment, complete with resource limits and security boundaries.
The platform itself? A collection of NestJS microservices talking to each other through events, managing PostgreSQL databases, and orchestrating complex deployment workflows. It's like having a DevOps team in software form.
System Architecture Overview
Each service owns its domain completely - no shared databases, no tight coupling. They communicate purely through events, making the system incredibly resilient and scalable.
The Backend Deep Dive
This is where things get spicy. Instead of cramming everything into one giant application, I split functionality across five specialized microservices. Each one handles a specific domain and communicates through a message bus.
API Gateway: The Traffic Controller
The gateway is way more than just a reverse proxy. It's the orchestration layer that handles authentication, rate limiting, request validation, and service coordination.
Authentication Flow: When users hit the login endpoint, the gateway handles the entire OAuth dance. It validates the authorization token and authorizes the user to access protected routes. Every API call goes through token validation here - no service downstream needs to worry about auth.
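Here's a minimal sketch of what that gateway-level check can look like as a NestJS guard - the guard name, JWT usage, and header handling are illustrative, not the exact implementation:

```typescript
// auth.guard.ts - hypothetical gateway guard; names and JWT details are illustrative
import { CanActivate, ExecutionContext, Injectable, UnauthorizedException } from '@nestjs/common';
import { JwtService } from '@nestjs/jwt';

@Injectable()
export class GatewayAuthGuard implements CanActivate {
  constructor(private readonly jwt: JwtService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const [scheme, token] = (request.headers.authorization ?? '').split(' ');

    if (scheme !== 'Bearer' || !token) {
      throw new UnauthorizedException('Missing bearer token');
    }

    try {
      // Attach the decoded user so nothing downstream has to re-validate
      request.user = await this.jwt.verifyAsync(token);
      return true;
    } catch {
      throw new UnauthorizedException('Invalid or expired token');
    }
  }
}
```

Because the guard attaches the decoded user to the request, downstream controllers can trust the identity without touching the token again.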
Request Orchestration: Some operations require multiple services to work together. When a user wants to deploy a project, the gateway coordinates between the users service (to verify permissions), the pipeline service (to start the deployment), and the notification service (to send updates). It's like having a conductor for an orchestra of services.
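Conceptually, that coordination boils down to something like this - the injection tokens, message patterns, and payloads below are assumptions for illustration:

```typescript
// deploy.orchestrator.ts - sketch of gateway-side coordination; patterns and payloads are assumptions
import { Inject, Injectable, ForbiddenException } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs';

@Injectable()
export class DeployOrchestrator {
  constructor(
    @Inject('USERS_SERVICE') private readonly users: ClientProxy,
    @Inject('PIPELINE_SERVICE') private readonly pipeline: ClientProxy,
    @Inject('NOTIFICATION_SERVICE') private readonly notifications: ClientProxy,
  ) {}

  async deploy(userId: string, projectId: string) {
    // 1. Ask the users service whether this user may deploy the project
    const allowed = await firstValueFrom(
      this.users.send('users.can_deploy', { userId, projectId }),
    );
    if (!allowed) throw new ForbiddenException('Deployment not permitted');

    // 2. Kick off the deployment in the pipeline service
    const deployment = await firstValueFrom(
      this.pipeline.send('pipeline.start_deployment', { userId, projectId }),
    );

    // 3. Fire-and-forget a notification event
    this.notifications.emit('notification.deployment_started', deployment);

    return deployment;
  }
}
```

The gateway waits on the request/response calls it actually needs (permissions, deployment kickoff) and fire-and-forgets the notification event.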
Pipeline Microservice: The Deployment Engine
This is the heart of the platform - the service that takes source code and turns it into running applications.
Build Detection & Orchestration: When a deployment starts, the service analyzes the repository structure to determine build strategy. It creates timestamped container images, sets up GitHub webhooks automatically, and coordinates with multiple external services.
The workflow looks something like this (a condensed code sketch follows the list):
Project Creation: Creates database entries for project tracking
Namespace Provisioning: Each user gets isolated Kubernetes namespaces
GitHub Integration: Sets up directories in a GitOps repository and configures webhooks
Container Building: Triggers GitHub Actions to build and push container images
ArgoCD Deployment: Creates ArgoCD applications for GitOps-style deployments
Monitoring Setup: Automatically configures health checks and metrics collection
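Condensed into code, the orchestration might read like this - every collaborator here is a hypothetical stand-in for the real implementation described above:

```typescript
// project-deployment.service.ts - condensed sketch of the workflow above; all collaborators are stand-ins
// (real DI would use injection tokens or concrete providers; interfaces keep the sketch short)
import { Injectable, Logger } from '@nestjs/common';

interface ProjectsRepository { create(data: { userId: string; repoUrl: string }): Promise<{ id: string }>; }
interface NamespaceProvisioner { ensureNamespace(name: string): Promise<void>; }
interface GitOpsRepoService { scaffoldProject(projectId: string): Promise<void>; }
interface ContainerBuildService { buildImage(projectId: string): Promise<{ tag: string }>; }
interface ArgoCdService { createApplication(projectId: string, imageTag: string): Promise<void>; }
interface MonitoringService { enable(projectId: string): Promise<void>; }

@Injectable()
export class ProjectDeploymentService {
  private readonly logger = new Logger(ProjectDeploymentService.name);

  constructor(
    private readonly projects: ProjectsRepository,
    private readonly k8s: NamespaceProvisioner,
    private readonly gitops: GitOpsRepoService,
    private readonly builds: ContainerBuildService,
    private readonly argocd: ArgoCdService,
    private readonly monitoring: MonitoringService,
  ) {}

  async createAndDeploy(userId: string, repoUrl: string) {
    const project = await this.projects.create({ userId, repoUrl });  // 1. project creation
    await this.k8s.ensureNamespace(`tenant-${userId}`);               // 2. namespace provisioning
    await this.gitops.scaffoldProject(project.id);                    // 3. GitOps directories + webhook
    const image = await this.builds.buildImage(project.id);           // 4. container building
    await this.argocd.createApplication(project.id, image.tag);       // 5. ArgoCD application
    await this.monitoring.enable(project.id);                         // 6. health checks + metrics
    this.logger.log(`Project ${project.id} deployed with image tag ${image.tag}`);
    return project;
  }
}
```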
Rebuild Management: The rebuild process is where it gets interesting. When code changes are pushed to GitHub, the service generates new container tags, rebuilds images, updates Helm charts in the GitOps repository, and forces ArgoCD to sync the changes. It's a complex choreography of multiple systems working together.
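A sketch of that choreography, with the build, GitOps, and ArgoCD helpers as illustrative stand-ins:

```typescript
// rebuild.service.ts - sketch of the push-triggered rebuild; helper interfaces are assumptions
import { Injectable, Logger } from '@nestjs/common';

interface BuildTrigger { buildImage(repoFullName: string, tag: string): Promise<void>; }
interface GitOpsRepo { setImageTag(projectId: string, tag: string): Promise<void>; }
interface ArgoCd { syncApplication(projectId: string): Promise<void>; }

@Injectable()
export class RebuildService {
  private readonly logger = new Logger(RebuildService.name);

  constructor(
    private readonly builds: BuildTrigger,
    private readonly gitops: GitOpsRepo,
    private readonly argocd: ArgoCd,
  ) {}

  async onPush(projectId: string, repoFullName: string) {
    // 1. New timestamped tag so every push produces a distinct, traceable image
    const tag = `${Date.now()}`;

    // 2. Rebuild and push the container image (e.g. via a GitHub Actions trigger)
    await this.builds.buildImage(repoFullName, tag);

    // 3. Point the Helm chart in the GitOps repository at the new tag
    await this.gitops.setImageTag(projectId, tag);

    // 4. Force ArgoCD to sync so the cluster picks up the change immediately
    await this.argocd.syncApplication(projectId);

    this.logger.log(`Rebuilt ${repoFullName} as ${tag}`);
  }
}
```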
Project Lifecycle Management: The deletion process shows how complex cleanup can be - stopping monitoring, removing Cloudflare DNS entries, cleaning up ArgoCD applications, and cascading through all the database relationships. It's like performing surgery on a running system.
Users Microservice: Identity & Access
This service owns everything related to user identity, including roles.
Project Ownership: Each project is tied to a user and their email, creating clear ownership boundaries. The service tracks project counts, deployment limits, and resource quotas per user.
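A minimal sketch of how such a quota check might look - the field names and repository interface are assumptions:

```typescript
// quota.service.ts - sketch of per-user project limits; field names are illustrative
import { Injectable, ForbiddenException } from '@nestjs/common';

interface UserRecord { id: string; projectCount: number; projectLimit: number; }
interface UsersRepo {
  findById(id: string): Promise<UserRecord>;
  incrementProjectCount(id: string): Promise<void>;
}

@Injectable()
export class QuotaService {
  constructor(private readonly users: UsersRepo) {}

  async assertCanCreateProject(userId: string): Promise<void> {
    const user = await this.users.findById(userId);
    if (user.projectCount >= user.projectLimit) {
      throw new ForbiddenException(`Project limit of ${user.projectLimit} reached`);
    }
    await this.users.incrementProjectCount(userId);
  }
}
```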
Notification Microservice: Keeping Everyone Informed
Real-time communication is crucial for a good developer experience. This service handles the complex task of keeping users informed about their deployments.
Dual Channel Communication: Notifications go through both NATS events (for internal service communication) and direct HTTP calls to N8N webhooks (for external integrations). This redundancy ensures users always know what's happening with their deployments.
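In NestJS terms, the fan-out might look something like this - the NATS subject, injection token, and N8N webhook URL are placeholders:

```typescript
// notifier.service.ts - sketch of the dual-channel fan-out; subjects and URLs are placeholders
import { Inject, Injectable, Logger } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { HttpService } from '@nestjs/axios';
import { firstValueFrom } from 'rxjs';

@Injectable()
export class NotifierService {
  private readonly logger = new Logger(NotifierService.name);

  constructor(
    @Inject('NATS_CLIENT') private readonly nats: ClientProxy,
    private readonly http: HttpService,
  ) {}

  async notify(event: { projectId: string; status: string; message: string }) {
    // Channel 1: NATS event for other services (dashboard updates, pipeline bookkeeping, ...)
    this.nats.emit('notifications.build_status', event);

    // Channel 2: direct HTTP call to an N8N webhook for external integrations
    try {
      await firstValueFrom(this.http.post(process.env.N8N_WEBHOOK_URL ?? '', event));
    } catch (err) {
      // The NATS path already delivered internally, so a webhook failure is logged, not propagated
      this.logger.warn(`N8N webhook failed for ${event.projectId}: ${err}`);
    }
  }
}
```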
Event Processing: The service subscribes to build status events and transforms them into user-friendly notifications. Whether it's a build starting, completing, or failing, users get immediate feedback through multiple channels.
The Event Bus: NATS serves as the central nervous system. Services publish events when significant things happen, and other services subscribe to events they care about. This loose coupling means I can add new features by simply subscribing to existing events.
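Subscribing to one of those events from any service is a one-liner with @EventPattern - the subject name here is an assumption:

```typescript
// build-events.controller.ts - sketch of subscribing to NATS events in a NestJS microservice
import { Controller, Logger } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';

@Controller()
export class BuildEventsController {
  private readonly logger = new Logger(BuildEventsController.name);

  // Any service interested in build status just subscribes to the same subject -
  // the publisher never knows (or cares) who is listening.
  @EventPattern('pipeline.build_status')
  handleBuildStatus(
    @Payload() event: { projectId: string; status: 'started' | 'succeeded' | 'failed' },
  ) {
    this.logger.log(`Build ${event.status} for project ${event.projectId}`);
    // transform into a user-facing notification, update local state, etc.
  }
}
```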
Data Consistency: Since services don't share databases, maintaining consistency requires careful event choreography. When a deployment completes, multiple services update their local state based on the events they receive.
Database Strategy & Architecture
Each microservice owns its data completely - no shared databases, no foreign key references across service boundaries. The users service has its own PostgreSQL instance with user profiles and project metadata. The pipeline service maintains deployment history, build logs, and container image records.
Entity Relationships: Within each service, the data model is quite sophisticated. Projects have repositories, repositories have images, and everything connects through proper foreign key relationships. But these relationships never cross service boundaries.
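Inside the pipeline service, those relationships might be modeled with TypeORM entities along these lines - column names are illustrative, and the user is referenced only by ID, never by a cross-service foreign key:

```typescript
// pipeline entities - sketch of the intra-service relationships; column names are illustrative
import { Entity, PrimaryGeneratedColumn, Column, OneToMany, ManyToOne } from 'typeorm';

@Entity()
export class Project {
  @PrimaryGeneratedColumn('uuid') id!: string;
  @Column() userId!: string; // reference by ID only - never a cross-service foreign key
  @OneToMany(() => Repository, (repo) => repo.project) repositories!: Repository[];
}

@Entity()
export class Repository {
  @PrimaryGeneratedColumn('uuid') id!: string;
  @Column() url!: string;
  @ManyToOne(() => Project, (project) => project.repositories) project!: Project;
  @OneToMany(() => Image, (image) => image.repository) images!: Image[];
}

@Entity()
export class Image {
  @PrimaryGeneratedColumn('uuid') id!: string;
  @Column() tag!: string; // timestamped tag produced by each build
  @ManyToOne(() => Repository, (repo) => repo.images) repository!: Repository;
}
```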
The Real-World Deployment Flow
Let me walk you through what actually happens when you deploy a project, based on the real implementation:
Step 1: Project Initialization
Creates project entry with user ID and repository name
Provisions a Kubernetes namespace for isolation (sketched in code after this step)
Sets up GitOps directory structure
Configures Cloudflare subdomain
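Step 1's namespace provisioning can be a thin wrapper around the Kubernetes client - this sketch assumes the 0.x @kubernetes/client-node API and a hypothetical naming scheme:

```typescript
// namespace.provisioner.ts - sketch of per-tenant namespace isolation
// (assumes the 0.x @kubernetes/client-node API; label keys and naming are placeholders)
import { Injectable } from '@nestjs/common';
import * as k8s from '@kubernetes/client-node';

@Injectable()
export class NamespaceProvisioner {
  private readonly core: k8s.CoreV1Api;

  constructor() {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault(); // in-cluster config or local kubeconfig
    this.core = kc.makeApiClient(k8s.CoreV1Api);
  }

  async ensureNamespace(name: string): Promise<void> {
    try {
      await this.core.createNamespace({
        metadata: { name, labels: { 'platform/managed': 'true' } },
      });
    } catch (err: any) {
      // 409 means the namespace already exists, which is fine for an idempotent step
      if (err?.response?.statusCode !== 409) throw err;
    }
  }
}
```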
Step 2: Build Process
Generates timestamped container image tags
Triggers GitHub Actions for container building (see the sketch after this step)
Creates a webhook for future code changes
Updates the database with image metadata
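Step 2's build trigger is essentially a workflow_dispatch call against GitHub Actions - the workflow file name and inputs are assumptions about how the build pipeline is wired:

```typescript
// build-trigger.service.ts - sketch of kicking off a GitHub Actions image build via Octokit
import { Injectable } from '@nestjs/common';
import { Octokit } from '@octokit/rest';

@Injectable()
export class BuildTriggerService {
  private readonly octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  async triggerBuild(owner: string, repo: string): Promise<string> {
    // Timestamped tag so every build produces a unique, traceable image
    const imageTag = `${repo}-${Date.now()}`;

    await this.octokit.rest.actions.createWorkflowDispatch({
      owner,
      repo,
      workflow_id: 'build-and-push.yml', // hypothetical workflow file in the repo
      ref: 'main',
      inputs: { image_tag: imageTag },
    });

    return imageTag;
  }
}
```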
Step 3: Deployment Orchestration
Updates Helm charts with new image references
Creates an ArgoCD application for GitOps deployment (sketched after this step)
Starts monitoring and health checks
Sends real-time status updates to the frontend
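Step 3 boils down to creating an Application custom resource that ArgoCD then reconciles - the repo paths and Helm parameter are placeholders, and the client call again assumes the 0.x @kubernetes/client-node API:

```typescript
// argocd.service.ts - sketch of creating an ArgoCD Application for GitOps deployment
// (fields follow the argoproj.io/v1alpha1 Application CRD; names and paths are placeholders)
import { Injectable } from '@nestjs/common';
import * as k8s from '@kubernetes/client-node';

@Injectable()
export class ArgoCdService {
  private readonly crds: k8s.CustomObjectsApi;

  constructor() {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    this.crds = kc.makeApiClient(k8s.CustomObjectsApi);
  }

  async createApplication(projectName: string, tenantNamespace: string, imageTag: string) {
    const application = {
      apiVersion: 'argoproj.io/v1alpha1',
      kind: 'Application',
      metadata: { name: projectName, namespace: 'argocd' },
      spec: {
        project: 'default',
        source: {
          repoURL: process.env.GITOPS_REPO_URL,      // the GitOps repository holding Helm charts
          path: `projects/${projectName}`,           // per-project chart directory
          targetRevision: 'main',
          helm: { parameters: [{ name: 'image.tag', value: imageTag }] },
        },
        destination: { server: 'https://kubernetes.default.svc', namespace: tenantNamespace },
        syncPolicy: { automated: { prune: true, selfHeal: true } },
      },
    };

    await this.crds.createNamespacedCustomObject(
      'argoproj.io', 'v1alpha1', 'argocd', 'applications', application,
    );
  }
}
```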
Step 4: Monitoring & Maintenance
Tracks deployment status and resource usage
Handles rebuild requests from webhook events
Manages scaling and resource optimization
Provides logs and metrics to users
What I Learned Building This
Distributed Systems Are Hard: The complexity isn't in any single service - it's in the interactions between them. Debugging issues that span multiple services, handling partial failures, and maintaining data consistency requires a completely different mindset.
Event-Driven Architecture is Powerful: Once you embrace events, the system becomes incredibly flexible. New features can often be added by subscribing to existing events without modifying existing services.
External Integrations Multiply Complexity: GitHub webhooks, ArgoCD APIs, Kubernetes cluster management, Cloudflare DNS - each integration point is a potential failure mode that needs handling.
Database Design Matters More: When services can't rely on joins across boundaries, each service's schema needs to be carefully designed for its specific access patterns.
Observability is Critical: With multiple services, proper logging, metrics, and tracing aren't nice-to-have features - they're essential for understanding what's happening in your system.
The best part? It actually works well enough that I use it for all my personal projects now. Sometimes the best way to learn is to build something you'll actually use.
The journey continues...