Deploying Scalable Locust Performance Testing on GKE with Helm, PVC, and Prometheus Integration


Overview
In this article, we'll explore how to deploy a scalable Locust-based load testing environment on Google Kubernetes Engine (GKE) using Helm, with features like:
Locust master/worker autoscaling via HPA
Shared test file storage using PVC alternatives
Secure access over VPN or Load Balancer
Docker image compatibility on Windows
1. Containerizing Locust for Kubernetes
Base Dockerfile (Linux-compatible):
FROM python:3.11-slim
WORKDIR /locust
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["locust", "-f", "main.py"]
Important: Use Linux containers, not Windows ones. On Docker Desktop, ensure:
The WSL2 backend is enabled
You're using Linux containers (right-click the Docker tray icon > "Switch to Linux containers")
Build your image:
docker build -t your-dockerhub-user/locust:latest .
docker push your-dockerhub-user/locust:latest
2. Helm Setup for Locust on GKE
Structure your Helm chart:
locust-chart/
├── templates/
│   ├── master-deployment.yaml
│   ├── worker-deployment.yaml
│   ├── service.yaml
│   └── hpa.yaml
├── values.yaml
└── Chart.yaml
Key features:
Master & Worker pods
HPA-enabled (autoscaling based on CPU/memory)
Resource limits & requests
PVC/volume mount support
Example values.yaml (partial):
image:
  repository: your-dockerhub-user/locust
  tag: latest
  pullPolicy: IfNotPresent
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
hpa:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
Sample Locust master deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
  namespace: load
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust
      role: master
  template:
    metadata:
      labels:
        app: locust
        role: master
    spec:
      containers:
        - name: locust
          image: your-dockerhub-user/locust:latest
          envFrom:
            - configMapRef:
                name: locust-config
          ports:
            - containerPort: 8089
            - containerPort: 5557
            - containerPort: 5558
          command: ["sh", "-c"]
          args: ["locust -f /mnt/locust/$LOCUST_FILE $LOCUST_ARGS --master --web-host=0.0.0.0"]
          volumeMounts:
            - name: locust-volume
              mountPath: /mnt/locust
      volumes:
        - name: locust-volume
          persistentVolumeClaim:
            claimName: locust-pvc
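Both deployments read LOCUST_FILE and LOCUST_ARGS via envFrom from a ConfigMap named locust-config, which isn't shown in the chart layout above. A minimal sketch (the host value is a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-config
  namespace: load
data:
  LOCUST_FILE: "main.py"
  LOCUST_ARGS: "--host=https://target.example.com"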
Sample Locust worker deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker
  namespace: load
spec:
  replicas: 1  # Start with 1 and let HPA scale it
  selector:
    matchLabels:
      app: locust
      role: worker
  template:
    metadata:
      labels:
        app: locust
        role: worker
    spec:
      containers:
        - name: locust
          image: your-dockerhub-user/locust:latest
          envFrom:
            - configMapRef:
                name: locust-config
          command: ["sh", "-c"]
          args: ["locust -f /mnt/locust/$LOCUST_FILE --worker --master-host=locust-master"]
          volumeMounts:
            - name: locust-volume
              mountPath: /mnt/locust
      volumes:
        - name: locust-volume
          persistentVolumeClaim:
            claimName: locust-pvc
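The worker's --master-host=locust-master flag assumes a Service named locust-master exists so the name resolves in-cluster. A sketch of the chart's service.yaml under that assumption (workers connect to the master on 5557/5558):
apiVersion: v1
kind: Service
metadata:
  name: locust-master
  namespace: load
spec:
  selector:
    app: locust
    role: master
  ports:
    - name: web
      port: 8089
    - name: comm
      port: 5557
    - name: comm-plus-1
      port: 5558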
Sample HPA file:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: locust-master-hpa
  namespace: load
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-master
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
A similar HPA manifest is needed for the worker deployment; in practice the worker is the tier you want to autoscale, since a Locust cluster has exactly one master. Both manifests can later be templated into the Helm chart, as sketched below.
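A sketch of a Helm-templated HPA wired to the values.yaml keys shown earlier, targeting the worker deployment:
{{- if .Values.hpa.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: locust-worker-hpa
  namespace: {{ .Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-worker
  minReplicas: {{ .Values.hpa.minReplicas }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.hpa.targetCPUUtilizationPercentage }}
{{- end }}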
Deploy with:
helm install locust ./locust-chart -n load --create-namespace
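Then verify that the pods, service, and HPA came up:
kubectl get pods,svc,hpa -n load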
3. Accessing the Locust Web UI on GKE
You can expose Locust Master via:
A. Internal Load Balancer (ILB) + VPN
Add this annotation to the master's Service definition:
annotations:
  cloud.google.com/load-balancer-type: "Internal"
Then access http://<INTERNAL_IP>:8089 from a VPN-connected device.
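For reference, a complete sketch of such an internal Service (the name is illustrative; the selector matches the master deployment above):
apiVersion: v1
kind: Service
metadata:
  name: locust-web
  namespace: load
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: locust
    role: master
  ports:
    - name: web
      port: 8089
      targetPort: 8089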
B. External Load Balancer (Public)
For external access, set the Service type:
type: LoadBalancer
Then access the UI via the EXTERNAL-IP shown by kubectl get svc.
4. Handling Shared Test Files Without ReadWriteMany PVC
GKE's default storage class only supports ReadWriteOnce.
Alternatives:
Filestore (NFS): recommended, if available
Google Cloud Storage (GCS) + init container
BusyBox uploader pod: manually inject test files into mounted volumes
Note: emptyDir works only within a single node; workers on different nodes won't have access.
Best solution for scalability: GCS + an initContainer that copies files into the pod's volume, as sketched below.
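A sketch of that pattern, assuming a bucket named gs://my-locust-tests (hypothetical) that the pod's service account can read, with an emptyDir replacing the PVC:
spec:
  initContainers:
    - name: fetch-tests
      image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      # Copy all test files from the bucket into the shared volume
      command: ["sh", "-c", "gsutil -m cp -r gs://my-locust-tests/* /mnt/locust/"]
      volumeMounts:
        - name: locust-volume
          mountPath: /mnt/locust
  containers:
    - name: locust
      # ...same container spec as the deployments above...
      volumeMounts:
        - name: locust-volume
          mountPath: /mnt/locust
  volumes:
    - name: locust-volume
      emptyDir: {}  # files are re-fetched from GCS on every pod start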
5. Storing Results & Logs
Locust results are saved in the master pod by default.
If using GCS, configure the output location with the --csv flag, then ship the files out via a GCS mount or gsutil rsync, as in the sketch below.
Avoid having workers write directly; stick to read-only access for them.
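As a sketch, a headless run on the master that writes CSV files and then pushes them to a hypothetical bucket:
# Writes results_stats.csv, results_failures.csv, etc.
locust -f /mnt/locust/main.py --master --headless --expect-workers 2 \
  -u 100 -r 10 --run-time 10m --csv results --csv-full-history

# Push the CSVs to GCS (requires gsutil and credentials in the pod)
gsutil cp results_*.csv gs://my-locust-results/run-001/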
6. Locust UI with Authentication
Locust by default doesn't support built-in login. To secure the UI:
Use an Ingress Controller with basic auth
Add Nginx Ingress annotations
Or use an OAuth2 Proxy for SSO
Alternatively, deploy Locust behind an API Gateway with auth rules.
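A minimal sketch of the NGINX Ingress basic-auth option, assuming the NGINX Ingress Controller is installed and the web Service from earlier exists (user name and host are placeholders):
# Create the htpasswd file and store it as a Secret
htpasswd -c auth locust-admin
kubectl create secret generic basic-auth --from-file=auth -n load
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: locust-web
  namespace: load
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - host: locust.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: locust-web
                port:
                  number: 8089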
7. Useful Debugging Tips
Pod stuck in Pending? → Likely a PVC or node-affinity issue
Service not reachable? → Check whether the Load Balancer IP is internal or external
VPN issues? → Confirm Cloud NAT or routes to internal IPs
Helm not templating? → Check {{ .Release.Namespace }} and the values structure
PVC ReadOnlyMany error? → The default GCE storage class does not support it; use GCS or Filestore
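A few commands that cover most of the cases above:
# Why is the pod Pending? Check the Events section at the bottom
kubectl describe pod -l app=locust -n load

# Internal or external IP? Check the TYPE and EXTERNAL-IP columns
kubectl get svc -n load -o wide

# Render the chart locally without installing anything
helm template locust ./locust-chart -n load --debug

# Tail master logs to confirm workers have connected
kubectl logs deploy/locust-master -n load -f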
Final Thoughts
This setup gives you:
Scalable Locust testing on GKE
Flexible access via ILB/VPN or an external Load Balancer
Smart file distribution using GCS
Real-time metrics with Grafana
Optional authentication on the Web UI
Coming Soon (Optional Ideas)
CI/CD integration for performance tests
Auto-start test on deploy
Schedule tests via CronJob
GitOps approach with ArgoCD
If you found this guide helpful, follow me for more in-depth tutorials on DevOps, GCP, and performance testing!
Happy load testing!