Multi-Tenancy in SaaS: A Complete Code Walkthrough


πŸ“– Read Time: ~20 mins | πŸ‘¨β€πŸ’» Skill Level: Intermediate


🏒 Introduction: What is Multi-Tenancy?

Imagine an apartment building vs. individual houses.

  • In single tenancy, every customer gets their own "house" (dedicated infrastructure). Expensive, hard to maintain!

  • In multi-tenancy, customers share the same "building" (app instance), but each has their own isolated space (data, configs, and performance).

This is how SaaS companies like Slack, Shopify, and Salesforce serve millions efficiently!

❓ Why Should You Care?

If you're building a SaaS product, multi-tenancy helps you:
βœ” Reduce costs (no separate servers per customer)
βœ” Scale effortlessly (one codebase, shared resources)
βœ” Deploy updates faster (no version chaos)

But how does it work? Let’s break it down.


πŸ”§ How Multi-Tenancy Works: 3 Key Approaches

1️⃣ Database-Level Isolation

Problem: How do you store tenant data securely?

Two Strategies:

  • Separate Databases (Best isolation, but expensive)

  • Shared Database, Separate Schemas (Most common)

Example: a PostgreSQL database where each tenant gets its own schema:

-- Tenant 1 Schema  
CREATE SCHEMA tenant1;  
CREATE TABLE tenant1.users (id SERIAL, name TEXT);  

-- Tenant 2 Schema  
CREATE SCHEMA tenant2;  
CREATE TABLE tenant2.users (id SERIAL, name TEXT);

βœ” Pros: Strong isolation, easy backups.
βœ– Cons: Queries and migrations are more complex (every schema change must run once per tenant).


2️⃣ Application-Level Isolation

Problem: How does the app know which tenant is accessing it?

Solution: resolve the tenant from each incoming request (for example, from an x-tenant-id header or the subdomain) and route it to the matching database.

Example in Node.js:

// Middleware to detect the tenant from a header or subdomain
app.use((req, res, next) => {
  const tenantId = req.headers['x-tenant-id'] || req.hostname.split('.')[0];
  const tenantDB = getTenantDB(tenantId); // Looks up the tenant's DB connection
  if (!tenantDB) return res.status(404).send('Unknown tenant'); // Reject unrecognized tenants
  req.tenant = tenantDB;
  next();
});

βœ” Pros: Flexible, works with any backend.
βœ– Cons: Requires careful session handling.


3️⃣ Kubernetes-Based Isolation (Infrastructure Level)

Problem: How do you deploy multi-tenant apps in Kubernetes without leaks?

Solution: Namespaces + Network Policies

  • Each tenant gets a dedicated namespace.

  • Network Policies block cross-tenant traffic.

  • Resource Quotas prevent noisy neighbors (a sketch follows the example below).

Example Kubernetes Setup:

# tenant1-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant1
  labels:
    tenant: tenant1   # Label the network policy below selects on

# tenant1-network-policy.yaml (No cross-namespace traffic!)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant1
spec:
  podSelector: {}           # Applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: tenant1   # Only same-tenant namespaces may connect
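
The third bullet above, Resource Quotas, isn't part of this snippet. Here's a minimal sketch of what one could look like (the limits are illustrative, not taken from the repo):

# tenant1-resource-quota.yaml (illustrative values)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant1-quota
  namespace: tenant1
spec:
  hard:
    requests.cpu: "2"      # Total CPU all tenant1 pods may request
    requests.memory: 4Gi   # Total memory all tenant1 pods may request
    limits.cpu: "4"        # Hard CPU ceiling for the namespace
    limits.memory: 8Gi     # Hard memory ceiling for the namespace
    pods: "10"             # Cap on the number of pods per tenant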

βœ” Pros: Strong security, scalable.
βœ– Cons: Needs Kubernetes expertise.


πŸ” Repository Structure Overview

Here's what you'll find in the repo (kubernetes-multi-tenant):

kubernetes-multi-tenant/
β”œβ”€β”€ manifests/
β”‚   β”œβ”€β”€ namespaces/
β”‚   β”‚   β”œβ”€β”€ tenant1-namespace.yaml
β”‚   β”‚   └── tenant2-namespace.yaml
β”‚   β”œβ”€β”€ network-policies/
β”‚   β”‚   └── deny-cross-tenant.yaml
β”‚   β”œβ”€β”€ deployments/
β”‚   β”‚   β”œβ”€β”€ tenant1-deployment.yaml
β”‚   β”‚   └── tenant2-deployment.yaml
β”‚   └── services/
β”‚       β”œβ”€β”€ tenant1-service.yaml
β”‚       └── tenant2-service.yaml
└── README.md

☸️ Kubernetes Configuration Breakdown

1. Namespace Isolation

File: manifests/namespaces/tenant1-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: tenant1
  labels:
    tenant: tenant1

What This Does:

  • Creates a dedicated namespace for Tenant1

  • Applies a tenant: tenant1 label for easy selection

  • Repeat for Tenant2 with identical structure but tenant2 values (shown below)
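
For reference, tenant2's namespace is the same file with the values swapped:

# tenant2-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant2
  labels:
    tenant: tenant2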


2. Network Policy Enforcement

File: manifests/network-policies/deny-cross-tenant.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: tenant1

Key Security Features:

  • podSelector: {} applies to all pods in the namespace

  • Only allows ingress traffic from same-tenant namespaces

  • Effectively blocks all cross-tenant ingress traffic (see the note below)
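
One caveat worth calling out: this policy restricts ingress only, so pods in tenant1 can still open outbound connections to other namespaces. If you want symmetric isolation, an egress counterpart (my sketch, not part of the repo) would look like this:

# deny-cross-tenant-egress.yaml (sketch, not in the repo)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant-egress
  namespace: tenant1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tenant: tenant1   # Outbound traffic only to same-tenant namespaces

Keep in mind that once egress is restricted, DNS lookups to kube-system are blocked too, so a real deployment would also need a rule allowing traffic to the cluster's DNS pods.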


3. Tenant Deployments

File: manifests/deployments/tenant1-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-app
  namespace: tenant1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: multi-tenant-app
      tenant: tenant1
  template:
    metadata:
      labels:
        app: multi-tenant-app
        tenant: tenant1
    spec:
      containers:
      - name: app
        image: nginx:alpine
        ports:
        - containerPort: 80

Important Notes:

  • Deploys to tenant1 namespace specifically

  • Uses the shared app: multi-tenant-app label plus a tenant-specific tenant: tenant1 label

  • Simple nginx container for demonstration
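
One thing to note: if you apply a ResourceQuota like the one sketched earlier, Kubernetes rejects pods that don't declare resource requests and limits (unless a LimitRange supplies defaults). The container spec above would then need a resources block along these lines (values are illustrative):

        # Goes under the container entry in spec.template.spec.containers
        resources:
          requests:
            cpu: 100m       # Guaranteed CPU share
            memory: 128Mi   # Guaranteed memory
          limits:
            cpu: 250m       # Throttle point for CPU
            memory: 256Mi   # OOM-kill threshold for memory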


4. Service Exposure

File: manifests/services/tenant1-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: tenant1-service
  namespace: tenant1
spec:
  selector:
    app: multi-tenant-app
    tenant: tenant1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Service Discovery:

  • Other Tenant1 pods can access via tenant1-service.tenant1.svc.cluster.local

  • Completely isolated from Tenant2's services


πŸš€ Deploying the Demo

Follow these exact steps from the repository:

# Apply all configurations
kubectl apply -f manifests/namespaces/
kubectl apply -f manifests/network-policies/
kubectl apply -f manifests/deployments/
kubectl apply -f manifests/services/

# Verify deployment
kubectl get pods -n tenant1
kubectl get pods -n tenant2

# Test isolation (should time out, assuming tenant2 has the equivalent
# network policy applied in its namespace)
kubectl exec -n tenant1 $(kubectl get pod -n tenant1 -o name | head -1) -- curl -m 5 http://tenant2-service.tenant2.svc.cluster.local

# Same-tenant access (should succeed)
kubectl exec -n tenant1 $(kubectl get pod -n tenant1 -o name | head -1) -- curl -m 5 http://tenant1-service.tenant1.svc.cluster.local

πŸ” Testing Multi-Tenant Isolation

  1. Verify Namespace Isolation:

     kubectl get pods --namespace=tenant1
     kubectl get pods --namespace=tenant2
    
  2. Check Network Policies:

     kubectl describe networkpolicy deny-cross-tenant -n tenant1
    
  3. Test Cross-Tenant Communication:

     # This should fail due to the network policy (the request times out)
     kubectl exec -n tenant1 $(kubectl get pod -n tenant1 -o name | head -1) -- curl -m 5 http://tenant2-service.tenant2.svc.cluster.local
    

πŸ’‘ Key Takeaways from the Code

  1. Namespace Boundaries are your first line of defense

  2. Network Policies enforce strict isolation

  3. Labeling Strategy (tenant: tenant1) enables precise resource selection

  4. Service Discovery remains tenant-specific through DNS


πŸ“š Next Steps with This Repository

  1. Add Resource Quotas to prevent noisy neighbors (a starting sketch appears earlier in this post)

  2. Implement Tenant-Specific ConfigMaps for customization (a starting point is sketched after this list)

  3. Add Database Backends with tenant isolation

  4. Deploy a Real Application instead of nginx
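
For item 2, a tenant-scoped ConfigMap can start out as simple as this (keys and values are made up for illustration):

# tenant1-config.yaml (illustrative, not in the repo)
apiVersion: v1
kind: ConfigMap
metadata:
  name: tenant1-config
  namespace: tenant1
data:
  WELCOME_MESSAGE: "Welcome, Tenant 1!"
  FEATURE_FLAGS: "beta-dashboard"

Because it lives in the tenant1 namespace, only tenant1's pods can mount or reference it.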


πŸ’¬ Discussion Points

After exploring the actual repository code:

  1. What other security layers would you add?

  2. How might you modify this for hybrid tenancy models?

  3. What monitoring would you implement across tenants?

Try the demo yourself and share your experiences in the comments!

#Kubernetes #MultiTenancy #DevOps #CloudNative #SaaSDevelopment
