Building Monitoring on k3s: Deploying Prometheus + Grafana on a Dedicated Worker Node


Goal
In this post, I document how I deployed Prometheus and Grafana on a dedicated k3s worker node (k3s-worker-monitoring) to create a modern observability stack for my homelab, following practices similar to what startups and tech companies use.
Step 1: Prepare the Monitoring Worker Node
Create a VM on Proxmox named k3s-worker-monitoring and join it to the k3s cluster as a worker node.
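The join itself can be sketched roughly as follows. The server URL, token, and node name are placeholders for your own environment, and the label applied in the last step is my assumption to match the nodeSelector used in the values files later in this post:

```shell
# On the new VM: join the existing k3s cluster as an agent.
# <control-plane-ip> and <node-token> are placeholders; the token lives at
# /var/lib/rancher/k3s/server/node-token on the control-plane node.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<control-plane-ip>:6443 \
  K3S_TOKEN=<node-token> \
  sh -

# From a machine with kubectl access: label the node so the
# nodeSelector in the Prometheus/Grafana values files can match it.
kubectl label node k3s-worker-monitoring node-role.kubernetes.io/monitoring=true
```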
Step 2: Install Helm
Helm can be installed on any machine from which you issue Helm commands; it uses that machine's kubeconfig to communicate with the cluster.
On the control-plane node:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Step 3: Add Helm Repositories
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
Step 4: Install Prometheus
First, create your prometheus-values.yaml to:
- Set resource limits.
- Retain data for 3 days.
- Pin Prometheus to the monitoring node.
- Disable the built-in Grafana (we want to install it separately).
prometheus-values.yaml:
prometheus:
  prometheusSpec:
    retention: "3d"
    resources:
      requests:
        cpu: "500m"
        memory: "2Gi"
      limits:
        cpu: "1"
        memory: "3Gi"
    nodeSelector:
      node-role.kubernetes.io/monitoring: "true"
grafana:
  enabled: false
Then install Prometheus:
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
-n monitoring \
--create-namespace \
-f prometheus-values.yaml
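After the release installs, it is worth confirming that the stack's pods actually landed on the monitoring node (standard kubectl; the namespace matches the install command above):

```shell
# The NODE column should show k3s-worker-monitoring for the Prometheus pods.
kubectl get pods -n monitoring -o wide
```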
Step 5: Install Grafana
Create grafana-values.yaml to:
- Set resource limits.
- Use a NodePort service.
- Pin Grafana to the monitoring node.
grafana-values.yaml:
adminPassword: "YourSecurePassword"
service:
  type: NodePort
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "1Gi"
nodeSelector:
  node-role.kubernetes.io/monitoring: "true"
Then install Grafana:
helm install grafana grafana/grafana \
-n monitoring \
-f grafana-values.yaml
Step 6: Access Grafana
Find the NodePort:
kubectl get svc -n monitoring grafana
Example:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.43.33.219 <none> 80:30109/TCP 92s
Access Grafana at:
http://10.160.15.21:30109/
Get the admin password:
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Log in with:
Username: admin
Password: (from the command above)
Next Steps
1. Apply Taints and Tolerations
Prevent workloads from running on the control-plane node, and make sure monitoring apps only run on the k3s-worker-monitoring node.
Example taint:
kubectl taint nodes k3s-control-plane node-role.kubernetes.io/control-plane=:NoSchedule
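To go further and dedicate the monitoring node entirely, you could also taint k3s-worker-monitoring (for example, `kubectl taint nodes k3s-worker-monitoring monitoring=true:NoSchedule` — the key monitoring=true is my own example, not from the setup above) and give the monitoring pods a matching toleration. A sketch of the values to add:

```yaml
# Toleration matching the example taint above. Add it under
# prometheus.prometheusSpec.tolerations in prometheus-values.yaml,
# and at the top level of grafana-values.yaml.
tolerations:
  - key: "monitoring"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```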
2. Add a Persistent Volume for Grafana
To prevent data loss (dashboards, configs) when the pod restarts.
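With the grafana Helm chart this is a values-file change. A minimal sketch, assuming the cluster's default StorageClass (k3s ships local-path) is acceptable and 5Gi is enough:

```yaml
# grafana-values.yaml additions: persist dashboards and settings
# across pod restarts. The size is an example value.
persistence:
  enabled: true
  size: 5Gi
  # storageClassName: local-path   # k3s default; uncomment to pin it explicitly
```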
3. Connect Prometheus as a Data Source in Grafana
Go to Grafana → Configuration → Data Sources → Add data source → Prometheus.
URL:
http://prometheus-operated.monitoring.svc.cluster.local:9090
Save & Test.
4. How Prometheus Gets Data
Prometheus scrapes:
- Kubernetes metrics (via kube-state-metrics).
- Node-level metrics (via Node Exporter).
- Application metrics (if configured).
It uses ServiceMonitors and PodMonitors to automatically discover scrape targets in the cluster.
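For application metrics, a ServiceMonitor tells the Prometheus Operator which Services to scrape. A minimal sketch for a hypothetical app (the name my-app, its namespace, and its port name are placeholders; the release label must match the kube-prometheus-stack release name used above):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                # hypothetical application
  namespace: monitoring
  labels:
    release: prometheus       # must match the Helm release name
spec:
  selector:
    matchLabels:
      app: my-app             # label on the app's Service
  namespaceSelector:
    matchNames:
      - default               # namespace where the Service lives
  endpoints:
    - port: http              # named port on the Service
      path: /metrics
      interval: 30s
```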
5. Expose Grafana and Prometheus Publicly (Optional)
Use Ingress + cert-manager to provide a DNS name and TLS.
Protect with authentication or network restrictions.
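As a sketch, Grafana could be exposed through the grafana chart's own ingress values; the hostname and cert-manager issuer name here are placeholders for your own DNS and issuer:

```yaml
# grafana-values.yaml additions (hostname and issuer are placeholders).
# With an Ingress in place, the service type could go back to ClusterIP.
ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - grafana.example.com
  tls:
    - secretName: grafana-tls
      hosts:
        - grafana.example.com
```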
Summary
This setup replicates how many startups handle monitoring:
- Isolated workloads.
- Resource control.
- Observability via Prometheus + Grafana.
- Prepared for scaling and external exposure.
Future Enhancements
- Add Loki for log collection.
- Add Alertmanager notifications.
- Enable persistent storage across all monitoring components.