Deploying Scalable Locust Performance Testing on GKE with Helm, PVC, and Prometheus Integration


Overview

In this article, we’ll explore how to deploy a scalable Locust-based load testing environment on Google Kubernetes Engine (GKE) using Helm, with features like:

  • 🐍 Locust Master/Worker autoscaling via HPA

  • 💾 Shared test file storage using PVC alternatives

  • 🔐 Secure access over VPN or Load Balancer

  • 🔧 Docker image compatibility on Windows


📦 1. Containerizing Locust for Kubernetes

Base Dockerfile (Linux-compatible):

FROM python:3.11-slim

WORKDIR /locust

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["locust", "-f", "main.py"]

👉 Important: Use Linux containers, not Windows ones. On Docker Desktop, ensure:

  • WSL2 backend is enabled

  • You’re using Linux containers (right-click Docker tray > “Switch to Linux containers”)

Build your image:

docker build -t your-dockerhub-user/locust:latest .
docker push your-dockerhub-user/locust:latest

🧰 2. Helm Setup for Locust on GKE

Structure your Helm chart:

locust-chart/
  ├── templates/
  │   ├── master-deployment.yaml
  │   ├── worker-deployment.yaml
  │   ├── service.yaml
  │   └── hpa.yaml
  ├── values.yaml
  └── Chart.yaml

Key features:

  • Master & Worker pods

  • HPA-enabled (autoscaling based on CPU/memory)

  • Resource limits & requests

  • PVC/volume mount support

Example values.yaml (partial):

image:
  repository: your-dockerhub-user/locust
  tag: latest
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

hpa:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
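
The deployment manifests below read their configuration from a ConfigMap named locust-config, which is not shown in the chart layout above. A minimal sketch, assuming a main.py locustfile and a placeholder target host:

apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-config
  namespace: load
data:
  LOCUST_FILE: "main.py"                     # locustfile inside the shared volume
  LOCUST_ARGS: "--host=https://example.com"  # assumed target; adjust per test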

Sample Locust master deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
  namespace: load
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust
      role: master
  template:
    metadata:
      labels:
        app: locust
        role: master
    spec:
      containers:
      - name: locust
        image: your-dockerhub-user/locust:latest
        envFrom:
          - configMapRef:
              name: locust-config
        ports:
          - containerPort: 8089
          - containerPort: 5557
          - containerPort: 5558
        command: ["sh", "-c"]
        args: ["locust -f /mnt/locust/$LOCUST_FILE $LOCUST_ARGS --master --web-host=0.0.0.0"]
        volumeMounts:
          - name: locust-volume
            mountPath: /mnt/locust
      volumes:
        - name: locust-volume
          persistentVolumeClaim:
            claimName: locust-pvc

Sample worker deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker
  namespace: load
spec:
  replicas: 1 # Start with 1 and let HPA scale it
  selector:
    matchLabels:
      app: locust
      role: worker
  template:
    metadata:
      labels:
        app: locust
        role: worker
    spec:
      containers:
      - name: locust
        image: your-dockerhub-user/locust:latest
        envFrom:
        - configMapRef:
            name: locust-config
        command: ["sh", "-c"]
        args: ["locust -f /mnt/locust/$LOCUST_FILE --worker --master-host=locust-master"]
        volumeMounts:
          - name: locust-volume
            mountPath: /mnt/locust
      volumes:
        - name: locust-volume
          persistentVolumeClaim:
            claimName: locust-pvc
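
Both manifests mount a claim named locust-pvc. A minimal sketch of that claim (the size is an assumption; note that GKE's default storage class gives ReadWriteOnce, so master and workers can only share it while scheduled on one node; section 4 covers alternatives):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: locust-pvc
  namespace: load
spec:
  accessModes:
    - ReadWriteOnce  # default GKE storage class; see section 4 for alternatives
  resources:
    requests:
      storage: 1Gi   # assumed size; locustfiles are small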

Sample HPA file (master):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: locust-master-hpa
  namespace: load
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-master
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

A similar HPA manifest is needed for the worker deployment; both can later be templated through Helm values, as sketched below.
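
As one possible sketch, a templates/hpa.yaml driven by the hpa block from the values.yaml shown earlier could look like this:

{{- if .Values.hpa.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: locust-worker-hpa
  namespace: {{ .Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-worker
  minReplicas: {{ .Values.hpa.minReplicas }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.hpa.targetCPUUtilizationPercentage }}
{{- end }}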

Deploy with:

helm install locust ./locust-chart -n load --create-namespace
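
Then confirm everything came up (standard kubectl):

kubectl get pods -n load
kubectl get svc -n load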

🌐 3. Accessing Locust Web UI on GKE

You can expose Locust Master via:

A. Internal Load Balancer (ILB) + VPN

Add this annotation to the Service definition:

annotations:
  cloud.google.com/load-balancer-type: "Internal"

Then access http://<INTERNAL_IP>:8089 from a VPN-connected device.
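
Put together, an internal Service for the master might look like this sketch; it also exposes the 5557/5558 ports that workers use to reach the master by service name:

apiVersion: v1
kind: Service
metadata:
  name: locust-master
  namespace: load
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: locust
    role: master
  ports:
    - name: web
      port: 8089
      targetPort: 8089
    - name: master-comm
      port: 5557
      targetPort: 5557
    - name: master-comm-2
      port: 5558
      targetPort: 5558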

B. External Load Balancer (Public)

For external access:

type: LoadBalancer

Then access it via the EXTERNAL-IP shown by kubectl get svc.


🧱 4. Handling Shared Test Files Without ReadWriteMany PVC

GKE’s default storage class only supports ReadWriteOnce.

Alternatives:

  1. Filestore (NFS): recommended, if available

  2. Google Cloud Storage (GCS) + Init Container

  3. BusyBox uploader pod: manually inject test files into mounted volumes

Note: emptyDir works only within the same node. Workers on different nodes won’t have access.

✅ Best solution for scalability: GCS + initContainer to copy files into the pod’s volume.
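
A sketch of that pattern for the master and worker pod specs, assuming a bucket named your-bucket and GCS read access from the pod (e.g. via Workload Identity); with this approach the shared claim can even be replaced by a per-pod emptyDir:

initContainers:
  - name: fetch-locustfiles
    image: google/cloud-sdk:slim   # ships with gsutil
    command: ["sh", "-c", "gsutil cp gs://your-bucket/locust/* /mnt/locust/"]
    volumeMounts:
      - name: locust-volume
        mountPath: /mnt/locust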


🧪 5. Storing Results & Logs

  • Locust results are saved in the Master pod by default.

  • If using GCS: configure the output location with the --csv flag plus a GCS mount or gsutil sync (see the sketch after this list).

  • Avoid having workers write directly; keep their access read-only.
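
A sketch of that flow on the master; the paths and bucket name are assumptions:

# Write CSV stats while the test runs
locust -f /mnt/locust/main.py --master --csv /mnt/locust/results/run1

# Afterwards, copy the results to a bucket
gsutil rsync -r /mnt/locust/results gs://your-bucket/locust-results/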


πŸ” 6. Locust UI with Authentication

Locust’s web UI does not enforce authentication by default. To secure it:

  • Use an Ingress Controller with basic auth

  • Add Nginx Ingress annotations

  • Or use an OAuth2 Proxy for SSO

Alternatively, deploy Locust behind an API Gateway with auth rules.
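
For example, with the NGINX Ingress Controller, basic auth takes an htpasswd secret plus a few annotations; the names below are assumptions:

# Create the htpasswd secret the Ingress references
htpasswd -c auth locust-admin
kubectl create secret generic locust-basic-auth --from-file=auth -n load

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: locust
  namespace: load
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: locust-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Locust: authentication required"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: locust-master
                port:
                  number: 8089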


🧠 7. Useful Debugging Tips

  • Pod stuck in Pending? → Likely a PVC or node-affinity issue

  • Service not reachable? → Check whether the Load Balancer IP is internal or external

  • VPN issues? → Confirm Cloud NAT or routes to internal IPs

  • Helm not templating? → Check {{ .Release.Namespace }} and the values structure

  • PVC ReadWriteMany error? → The default GCE storage class does not support it; use GCS/Filestore
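
A few standard commands that surface most of these issues:

kubectl describe pod <pod-name> -n load                # events explain Pending pods
kubectl get events -n load --sort-by=.lastTimestamp    # recent cluster events
kubectl logs deploy/locust-master -n load              # master logs
helm template locust ./locust-chart -n load            # render templates without installing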


📎 Final Thoughts

This setup gives you:

  • βš™οΈ Scalable locust testing on GKE

  • πŸ“‘ Flexible access via ILB/VPN or ELB

  • πŸ“‚ Smart file distribution using GCS

  • πŸ“Š Real-time metrics with Grafana

  • πŸ›‘οΈ Optional authentication on Web UI


📌 Coming Soon (Optional Ideas)

  • CI/CD integration for performance tests

  • Auto-start test on deploy

  • Schedule tests via CronJob

  • GitOps approach with ArgoCD


If you found this guide helpful, follow me for more in-depth tutorials on DevOps, GCP, and performance testing!

Happy load testing! 🐜🔥
