How to Run Loki, Fluent-bit, Grafana, and Prometheus on k3s


This article details the setup and tuning of a Kubernetes observability stack that uses Fluent Bit, Loki, and Grafana to monitor container logs, with a focus on reducing storage size. It walks through an issue where Loki's Write-Ahead Log (WAL) caused unexpected disk usage and the fixes that resolved it: reducing log label cardinality and enabling the compactor. It also covers exporting Fluent Bit metrics to Prometheus with a custom ServiceMonitor (working around a JSON content-type issue) and uses a custom Grafana dashboard to show how little CPU and memory Fluent Bit needs.
Goals:
Set up Fluent Bit, Loki, and Grafana to monitor container logs.
Tune Fluent Bit and Loki to optimize storage size.
Set up Prometheus to monitor Loki and Fluent Bit.
Ref: github repo - https://github.com/ichiroymsk/k3s-monitoring-stack
🔧 Loki Storage Issue: Debugging a Bloated WAL Directory
While building out my Kubernetes observability stack with Loki for log aggregation, I ran into an issue where Loki’s disk usage ballooned unexpectedly. After digging into it, I found the culprit: the WAL (Write-Ahead Log) directory was consuming over 40GB of space on one of my nodes in less than 15 minutes.
📍 Problem: Loki Storage Bloat
On one of my nodes, I checked the Loki PVC mount location and found this:
ichiro@k3s-worker-general:~$ sudo du -h --max-depth=1 /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki
13M /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki/boltdb-shipper-active
13M /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki/boltdb-shipper-cache
4.0K /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki/boltdb-shipper-compactor
43G /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki/wal
2.2G /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki/chunks
45G /var/lib/rancher/k3s/storage/pvc-57b02f45-53d4-4a54-80ad-44f45ed00e10_loki_storage-loki-0/loki
The WAL directory alone was 43GB, which is far larger than expected.
🧠 Understanding the WAL in Loki
Loki uses a Write-Ahead Log (WAL) to buffer incoming logs before they are processed and written into long-term storage (chunks). Under normal operation, the WAL should stay relatively small and be continuously flushed and cleared.
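For orientation, the WAL is configured under Loki's ingester block. Here is a rough sketch of what that looks like in a single-binary Loki 2.x config; the directory is illustrative and must line up with your persistent volume mount:
ingester:
  wal:
    enabled: true
    # Directory on the PV where WAL segments accumulate (the one that ballooned above)
    dir: /loki/wal
    # Flush in-memory chunks to storage on shutdown instead of relying on WAL replay
    flush_on_shutdown: true
    # Cap the memory used when replaying the WAL after a restart
    replay_memory_ceiling: 1GB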
A large WAL usually means something is preventing Loki from completing the ingestion pipeline, such as:
Too many active streams (caused by high-cardinality labels).
Issues with compactor or chunk shipper components.
Configuration errors in log forwarders like Fluent Bit or Fluentd.
🕵️‍♂️ Comparison with a Healthy Node
On a properly working node, the WAL directory looked like this:
ichiro@k3s-worker-monitoring:~$ sudo du -h --max-depth=1 /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki
276M /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki/chunks
912K /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki/boltdb-shipper-active
232K /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki/boltdb-shipper-cache
4.0K /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki/compactor
612K /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki/wal
278M /var/lib/rancher/k3s/storage/pvc-e212081f-3f92-4e87-975a-748faacf4c5b_monitoring_storage-loki-0/loki
This confirmed the issue was isolated to one Loki pod, and that WAL bloating is not expected behavior.
🛠️ Solution
I made two key changes that resolved the issue:
Reduced log label cardinality in Fluent Bit:
I disabled or removed high-cardinality fields such as the file path, Kubernetes annotations, and some kubernetes.labels (see the output sketch after this list).
This greatly reduced the number of unique log streams, helping Loki process and flush data faster.
Enabled the Loki compactor:
The compactor handles index compaction and retention cleanup when using the boltdb-shipper storage backend, so old data does not keep piling up on disk.
I verified that the compactor component was running and properly configured in the Loki Helm chart.
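For reference, here is a minimal sketch of what a lower-cardinality Loki output can look like in Fluent Bit, using the built-in loki output plugin. The host, label choices, and removed keys are illustrative assumptions, so adapt them to your pipeline:
[OUTPUT]
    Name                    loki
    Match                   kube.*
    # Illustrative service address; point this at your Loki instance
    Host                    loki.monitoring.svc.cluster.local
    Port                    3100
    # Keep the label set small and static to limit stream cardinality
    Labels                  job=fluent-bit
    Label_keys              $kubernetes['namespace_name'],$kubernetes['container_name']
    # Do not promote every kubernetes.labels entry to a Loki label
    Auto_kubernetes_labels  off
    # Strip bulky metadata from the log record itself
    Remove_keys             kubernetes,stream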
⚙️ Recommendations
✅ Enable the compactor when using boltdb-shipper (a fuller config sketch follows this list):
compactor:
  enabled: true
✅ Set stream limits and a retention policy:
limits_config:
  max_streams_per_user: 5000
  retention_period: 7d
✅ Tune Fluent Bit filters to drop unnecessary labels:
[FILTER]
    Name         kubernetes
    Match        kube.*
    Labels       Off
    Annotations  Off
✅ Monitor WAL and chunk directories regularly using du or storage dashboards.
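For context, in the Helm chart you typically just toggle the compactor on, but in the rendered Loki config the relevant blocks end up looking roughly like this. Key names follow Loki 2.x with boltdb-shipper; the working directory and values are illustrative, so adjust them to your deployment:
compactor:
  # Illustrative path inside the Loki data volume
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: filesystem
  # Without this the compactor only compacts the index and never applies retention
  retention_enabled: true

limits_config:
  max_streams_per_user: 5000
  retention_period: 7d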
Thoughts
This was a great reminder that log stream cardinality and proper component configuration are critical to a stable Loki setup—especially when using Helm and running in lightweight environments like k3s.
If you’re seeing unexpectedly high disk usage from Loki, check the WAL directory first—it might be quietly holding gigabytes of unprocessed data.
📊 Exporting Fluent Bit Metrics to Prometheus
While setting up observability for our log collection stack, I wanted to scrape Fluent Bit’s internal metrics directly into Prometheus. It turned out to be a little trickier than expected due to how Fluent Bit formats its metrics endpoint.
🐛 Problem: Prometheus Scraping Error
Prometheus was throwing the following error when trying to scrape metrics:
Error scraping target: received unsupported Content-Type "application/json" and no fallback_scrape_protocol specified for target
This happens because Fluent Bit's default metrics endpoint (/api/v1/metrics) returns JSON, while Prometheus expects the text-based exposition format, which Fluent Bit serves separately under /api/v1/metrics/prometheus.
I initially thought the issue might stem from the Helm chart I was using. However, it’s not specific to Helm—this is just how Fluent Bit’s metrics endpoint behaves out of the box.
🔧 Solution: Use a Custom ServiceMonitor
To fix this, I deployed a custom ServiceMonitor resource that explicitly configures how Prometheus scrapes metrics from Fluent Bit.
Here’s a sample manifest:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: fluent-bit
  namespace: monitoring
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluent-bit
  namespaceSelector:
    matchNames:
      - logging
  endpoints:
    - port: http
      path: /api/v1/metrics/prometheus
      interval: 30s
      scrapeTimeout: 10s
✅ Key Notes:
Make sure your Service for Fluent Bit is exposing the correct port and has the right labels.
Double-check that your Fluent Bit config has Prometheus metrics enabled (see the snippet after this list).
If you’re using kube-prometheus-stack, ensure Prometheus is set to watch the correct namespace for ServiceMonitor resources.
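On that second point, Fluent Bit only serves metrics when its built-in HTTP server is switched on. A minimal sketch of the relevant [SERVICE] settings; port 2020 is Fluent Bit's default and should match the port your Service and this ServiceMonitor target:
[SERVICE]
    # Built-in HTTP server that exposes /api/v1/metrics/prometheus
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020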
With this setup, Prometheus successfully scrapes Fluent Bit metrics, letting us visualize internals like processed log volume, dropped records, and output throughput—perfect for ongoing tuning and troubleshooting.
🕵️‍♂️ Fluent Bit Resource Usage: Lightweight and Efficient
To ensure our log collection stack remains efficient, we closely monitor the resource usage of each Fluent Bit pod running on our K3s nodes. I built a custom Grafana dashboard (screenshot below) to visualize key metrics such as CPU usage, memory consumption, and I/O throughput.
What we observed was impressive:
CPU Usage: Fluent Bit consumes only a negligible amount of CPU—typically less than 0.01 cores per pod—even under constant log processing.
Memory Usage: Each pod consistently uses just 9–10 MiB of memory, demonstrating Fluent Bit’s low footprint.
Log Input/Output: We measured an input and output rate of around 1–1.5 records/sec, with log volume around 1–1.2 KB/sec. Despite the steady flow, resource usage remains stable and minimal.
This level of efficiency is ideal for edge or resource-constrained environments, especially in a distributed logging architecture. It validates our choice to deploy Fluent Bit as a lightweight agent on each node before routing logs to Loki. The dashboard also helps us track per-pod metrics in real time, making it easier to detect anomalies or memory leaks if they ever occur.
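Given that footprint, it is reasonable to pin the DaemonSet down with modest resource requests and limits. The numbers below are hypothetical starting points derived from the observed usage, assuming your fluent-bit Helm chart exposes a standard resources block:
resources:
  requests:
    cpu: 10m       # observed usage stays well under 0.01 cores
    memory: 16Mi   # observed usage sits around 9-10 MiB per pod
  limits:
    memory: 64Mi   # headroom for buffer spikes before the pod is OOM-killed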