Monitoring Stack for VPS: From Zero to Grafana Dashboard

In previous projects, I often used Prometheus-based monitoring stacks — but never had to install or configure them myself. This time, I decided to set up a complete observability stack on a single VPS from scratch, fully containerized and securely exposed via Cloudflare Tunnel.
The goal wasn't a full-scale observability platform, just a minimal yet practical setup tailored to actual needs.
Objectives
Monitor VPS system-level resources (CPU, memory)
Track container-level resource usage (CPU, memory)
Collect and label Docker logs
Isolate application errors from background noise (Traefik, monitoring, tunneling)
Display all this in a clean, unified Grafana dashboard
Architecture Overview
Each component runs in its own Docker container, inside a dedicated Docker network, without exposing any public ports. All interfaces are available behind Cloudflare Tunnel with Access protection.
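As a rough illustration of that layout, a docker-compose excerpt might look like the sketch below. Service names, the network name, and the localhost-only port binding are my assumptions, not the author's actual file; the point is that containers share one dedicated network and any host bindings stay on 127.0.0.1.

```yaml
# docker-compose.yml (excerpt) — illustrative sketch, not the original file.
# All services join one dedicated network; ports, if mapped at all, bind to
# 127.0.0.1 only, so nothing is publicly exposed.
networks:
  monitoring:

services:
  prometheus:
    image: prom/prometheus:latest
    networks: [monitoring]
    ports:
      - "127.0.0.1:9100:9090"   # reachable on localhost:9100 only

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    networks: [monitoring]       # reaches the UIs via their network aliases
```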
Metrics Stack
node-exporter – host-level metrics (localhost:9140)
cAdvisor – container-level metrics (localhost:9130)
Prometheus – time-series collection + scraping (localhost:9100)
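Inside the Docker network, Prometheus scrapes the exporters by their network aliases. A minimal scrape config could look like this sketch, assuming the default in-container ports (node-exporter on 9100, cAdvisor on 8080) and alias names matching the container names:

```yaml
# prometheus.yml — minimal sketch; job names and the network aliases
# (node-exporter, cadvisor) are assumptions based on this setup.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]   # host-level metrics

  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]        # per-container metrics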
Logging Stack
promtail – reads logs from the Docker socket (not the log driver)
Loki – log storage with dynamic labeling (localhost:9110)
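Reading logs via the Docker socket (instead of a log driver) maps to Promtail's Docker service discovery. A sketch of that config, assuming Loki is reachable under the alias `loki` on its default port:

```yaml
# promtail-config.yml — sketch of Docker service discovery.
# Containers are discovered through the socket, and discovery metadata
# is relabeled into the container/image labels mentioned above.
clients:
  - url: http://loki:3100/loki/api/v1/push   # Loki alias assumed

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"                 # strip the leading slash
        target_label: container
      - source_labels: ["__meta_docker_container_image"]
        target_label: image
```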
Visualization
Grafana – preconfigured with Prometheus and Loki datasources (localhost:9120)
One main dashboard: system metrics and application errors
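Preconfiguring the datasources is typically done with Grafana's file-based provisioning. A sketch, again assuming the internal aliases and default ports:

```yaml
# grafana/provisioning/datasources/datasources.yml — sketch.
# URLs use Docker-internal aliases; both are loaded at startup,
# so no manual datasource setup is needed in the UI.
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    isDefault: true
  - name: Loki
    type: loki
    url: http://loki:3100
```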
Public Access
All UI components are exposed via Cloudflare Tunnel using subdomains:
grafana.kreativarc.com
prometheus.kreativarc.com
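With a token-managed tunnel (as the token management mentioned below suggests), these routes live in the Cloudflare dashboard; with a locally managed tunnel, the equivalent ingress rules would look roughly like this sketch, with the tunnel ID left as a placeholder:

```yaml
# cloudflared config.yml — illustrative sketch for a locally managed tunnel.
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: grafana.kreativarc.com
    service: http://grafana:3000       # Docker-internal alias
  - hostname: prometheus.kreativarc.com
    service: http://prometheus:9090
  - service: http_status:404           # required catch-all rule
```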
Containers communicate via Docker-internal network aliases. Logs are automatically labeled with container name, image, and service for easier querying and filtering.
Challenges
Setting it up took more time than expected. The tools themselves are well-documented — the issues mostly came from the surrounding environment:
Docker’s internal DNS and service discovery quirks
Cloudflare Tunnel routing and token management
Grafana datasource mappings occasionally failing silently
In the end, I managed to isolate container-level issues and error spikes in real time, while keeping the noise from infrastructure logs to a minimum.
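One way to keep infrastructure noise out of the error panels (an assumption about the approach, not necessarily what the author did) is to drop the infrastructure containers at scrape time in Promtail; the alternative is to keep everything and filter by the `container` label at query time in Grafana:

```yaml
# Promtail sketch: drop infrastructure containers so only application
# logs reach Loki. The container-name list is an assumption.
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(traefik|cloudflared|prometheus|grafana|loki|promtail|cadvisor|node-exporter)"
        action: drop
```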
Conclusion
This monitoring setup provides a clean, focused observability solution for a single VPS. It avoids unnecessary complexity (no sidecars, no operators, no Kubernetes) while offering real visibility into host and container-level issues.
Not the most ambitious system — but practical, reliable, and enough to catch what's going wrong before it escalates.
Written by

Arnold Lovas
Senior full-stack dev with an AI twist. I build weirdly useful things on my own infrastructure — often before coffee.