Fluent Bit + OpenObserve: The Ultimate Guide to Log Ingestion & Visualization


You can’t fix what you aren’t aware of. That’s why we log events: to gain clear insight into what’s happening inside a system, even when no one is watching. Logs are one of the best ways, and sometimes the only way, to identify and fix issues.
Applications in a live environment generally emit so much data that extracting meaningful information from it can be exhausting, especially when a production tech stack runs multiple types of applications, each outputting data in its own format, often scattered across systems. Things get more complicated for developers or DevOps engineers who are not proficient in UNIX text-processing tools like grep, sed, awk, and friends. Identifying the problem is only the first step, and in production, time is never on your side. In those moments, we all wish we could visualize everything and see the whole picture instantly instead of sifting through endless rows and columns of raw logs.
Managing or implementing such a system is no mean feat. Managed solutions cost money or require you to send data outside your infrastructure, and it’s not always easy to justify the big bucks or the security trade-offs for something that isn’t a direct source of revenue. There are so many options available, but they are either expensive or fall short. After trying a plethora of combinations, I finally found the holy grail for logging infrastructure: Fluent Bit and OpenObserve. Both of these projects are open source, have free versions, are actively maintained, can be self-hosted easily, and work great in tandem.
In this guide, we’ll build a complete logging pipeline, from raw Nginx logs to ingestion and visualization, using Docker Compose to set up Fluent Bit and OpenObserve step by step. By the end, you’ll have a fully functional, end-to-end logging system that you can run locally for development or adapt for production environments.
Architecture Overview
Nginx writes its access and error logs to disk, where Fluent Bit picks them up, processes them according to our requirements, and sends them to OpenObserve via HTTP ingestion. LogRotate automatically handles log file rotation, preventing uncontrolled growth and archiving old data according to the defined retention policy. OpenObserve stores the logs, backed by a PostgreSQL metadata store, and makes them available for search and visualization through its web interface, as shown in the simplified sketch below.
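```
Nginx --> ./data/logs/nginx --> Fluent Bit --> OpenObserve <--> PostgreSQL
          (files on disk)      (tail + parse)  (HTTP ingest)    (metadata)
                ^
                |
           LogRotate (rotation & retention)
```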
Setting up Nginx, OpenObserve & PostgreSQL Containers
We’ll start by spinning up three containers: Nginx for generating logs, PostgreSQL as the metadata store for OpenObserve, and OpenObserve itself for log ingestion, storage, and visualization.
Here’s the docker-compose.yml snippet:
```yaml
name: logging-infra
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    ports:
      - 127.0.0.1:8080:80
    volumes:
      - ./data/logs/nginx:/var/log/nginx
    restart: always

  postgres:
    image: postgres:17
    container_name: postgres
    ports:
      - 127.0.0.1:5432:5432
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=test1234
      - POSTGRES_DB=openobserve
    restart: always

  openobserve:
    image: openobserve/openobserve:v0.15.0-rc5
    container_name: openobserve
    ports:
      - 127.0.0.1:5080:5080
    volumes:
      - ./data/openobserve:/data
    environment:
      - ZO_META_STORE=postgres
      - ZO_COMPACT_DATA_RETENTION_DAYS=14
      - ZO_ROOT_USER_EMAIL=test@mirzabilal.com
      - ZO_ROOT_USER_PASSWORD=test1234
      - ZO_META_POSTGRES_DSN=postgres://postgres:test1234@postgres:5432/openobserve?sslmode=disable
    restart: always
    depends_on:
      - postgres
```
Notes:
- The Nginx container stores its logs in ./data/logs/nginx.
- PostgreSQL stores OpenObserve’s metadata in ./data/postgres.
- Data retention in OpenObserve is set to 14 days for this demo, but you can adjust it as needed.
- OpenObserve depends on PostgreSQL, so it is declared with depends_on: postgres.
After deploying with docker-compose up, you’ll be able to access the Nginx demo server at http://localhost:8080 and the OpenObserve server at http://localhost:5080. You can also view the logs being generated in ./data/logs/nginx.
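To sanity-check the stack, generate a couple of requests and confirm that Nginx writes log lines to the mounted directory (paths follow the volume mounts above):

```sh
# Generate an access-log entry, plus a 404 that also produces an error-log entry
curl -s http://localhost:8080/ > /dev/null
curl -s http://localhost:8080/missing-page > /dev/null

# Confirm the log files are growing on the host
tail -n 3 ./data/logs/nginx/access.log
tail -n 3 ./data/logs/nginx/error.log
```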
OpenObserve HTTP credentials
OpenObserve HTTP credentials are different from dashboard login credentials. While the username is the same, the password is different. This is where OpenObserve’s excellent documentation comes to the rescue. After logging into the OpenObserve dashboard, navigate to:
Data Sources → Custom tab → Fluent Bit
You’ll see a sample Fluent Bit output configuration. In that configuration, locate the Passwd parameter and copy its value. We’ll need this password in the next step.
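If you want to confirm the credentials before wiring up Fluent Bit, you can send a test event to the same ingestion endpoint we’ll configure later (replace the password with the Passwd value you just copied; nginx is the stream name we’ll also use below):

```sh
curl -u 'test@mirzabilal.com:PASTE_PASSWD_HERE' \
  -H 'Content-Type: application/json' \
  -d '[{"message": "credentials smoke test"}]' \
  http://localhost:5080/api/default/nginx/_json
```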
Setting up Fluent Bit Container
At this point, we have Nginx generating logs and OpenObserve ready to ingest and visualize them, but nothing is actually sending the logs over yet. This is where Fluent Bit comes in. Fluent Bit will tail Nginx log files, parse them into structured events, and forward them securely to OpenObserve using the HTTP API credentials we just retrieved.
Let’s add the Fluent Bit service to our docker-compose.yml. Then update the OPENOBSERVE_API_PASSWORD environment variable with the password we saved in the last step (from the Passwd field in OpenObserve’s Fluent Bit sample configuration). This value is referenced in the Fluent Bit config files to authenticate with OpenObserve.
```yaml
  fluent-bit:
    image: fluent/fluent-bit:4.0.7
    container_name: fluent-bit
    volumes:
      # Log volumes
      - ./data/logs:/var/log/host:ro
      - ./data/logs/fluentbit-out:/fluentbit-out
      - ./data/logs/fluentbit-out/fluentbit-db:/fluentbit-db
      # Mapping local config files and scripts
      - ./config/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro
      - ./config/parsers.conf:/fluent-bit/etc/parsers.conf:ro
      - ./config/scripts:/fluent-bit/etc/scripts:ro
    restart: always
    environment:
      - OPENOBSERVE_API_USERNAME=test@mirzabilal.com
      - OPENOBSERVE_API_PASSWORD=T231Fquv # This needs to be updated
```
Creating Fluent Bit Configurations
Now that our log sources (the Nginx access and error logs) are ready, we need to configure Fluent Bit to:
- Tail the log files we mounted from Nginx.
- Apply the appropriate parser depending on the log type.
- Send the structured events to the OpenObserve HTTP ingestion API.
We’ll create two configuration files: config/fluent-bit.conf and config/parsers.conf.
1. Creating parsers.conf
We’ll start with the Nginx access log format. Here’s a real example:
```
172.19.0.1 - - [11/Aug/2025:19:46:57 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:141.0) Gecko/20100101 Firefox/141.0" "-"
```
Our goal is to break this single log entry into structured fields:
| Field | Extracted Value |
| --- | --- |
| client_ip | 172.19.0.1 |
| ident | - |
| user | - |
| time | 11/Aug/2025:19:46:57 +0000 |
| method | GET |
| path | / |
| protocol | HTTP/1.1 |
| status | 304 |
| size | 0 |
| referer | - |
| agent | Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:141.0) Gecko/20100101 Firefox/141.0 |
To do this, we’ll define a regex-based parser in parsers.conf that matches the log format and assigns each captured group a meaningful name. These field names will make it easier to query and visualize the data in OpenObserve.
We’ll also add a second parser to handle Nginx error logs.
Here’s the config/parsers.conf:
```ini
[PARSER]
    Name         nginx
    Format       regex
    Regex        ^(?<client_ip>\S+) (?<ident>\S+) (?<user>\S+) \[(?<time>[^\]]+)\] "(?<method>\S+) (?<path>[^"]*?) (?<protocol>HTTP\/[^"]+)" (?<status>\d{3}) (?<size>\d+|-) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"
    Time_Key     time
    Time_Format  %d/%b/%Y:%H:%M:%S %z
    Time_Keep    On

[PARSER]
    Name         nginx_error
    Format       regex
    Regex        ^(?<time>[0-9]{4}/[0-9]{2}/[0-9]{2} [0-9:]+) \[error\] \d+#\d+: \*\d+ (?<message>.*?), client: (?<client_ip>[^,]+), server: (?<server>[^,]+), request: "(?<method>\S+) (?<path>\S+) (?<protocol>[^"]+)", host: "(?<host>[^"]+)", referrer: "(?<referer>[^"]+)"
    Time_Key     time
    Time_Format  %Y/%m/%d %H:%M:%S
    Time_Keep    On
```
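For reference, the sample access-log line from earlier parses into a structured event along these lines (with Time_Keep On, the original time field is preserved on the record):

```json
{
  "client_ip": "172.19.0.1",
  "ident": "-",
  "user": "-",
  "time": "11/Aug/2025:19:46:57 +0000",
  "method": "GET",
  "path": "/",
  "protocol": "HTTP/1.1",
  "status": "304",
  "size": "0",
  "referer": "-",
  "agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:141.0) Gecko/20100101 Firefox/141.0"
}
```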
2. Creating fluent-bit.conf
With our parsers file ready to transform access and error logs into a structured format, we can configure Fluent Bit to tail the Nginx access and error logs, tag events, and send them to OpenObserve.
Our config/fluent-bit.conf sets up two tail inputs for the mounted log files, applies the correct parser for each, enriches events with metadata (e.g., log_type and stage), and outputs the structured JSON to OpenObserve’s HTTP ingestion endpoint.
```ini
[SERVICE]
    flush           1
    daemon          Off
    log_level       info
    parsers_file    parsers.conf
    plugins_file    plugins.conf
    storage.metrics on

[INPUT]
    Name            tail
    Path            /var/log/host/nginx/access.log
    Tag             nginx.access
    Parser          nginx
    DB              /fluentbit-db/nginx-access.db
    Mem_Buf_Limit   10MB
    Skip_Long_Lines On

[INPUT]
    Name            tail
    Path            /var/log/host/nginx/error.log
    Tag             nginx.error
    Parser          nginx_error
    DB              /fluentbit-db/nginx-error.db
    Mem_Buf_Limit   10MB
    Skip_Long_Lines On

[FILTER]
    Name    modify
    Match   nginx*
    Rename  time @timestamp

[FILTER]
    Name    modify
    Match   nginx.access
    Add     log_type access

[FILTER]
    Name    modify
    Match   nginx.error
    Add     log_type error

[FILTER]
    Name    modify
    Match   *
    Add     stage production

[OUTPUT]
    Name             http
    Match            nginx.*
    URI              /api/default/nginx/_json
    Host             openobserve
    Port             5080
    tls              Off
    Format           json
    Json_date_key    @timestamp
    Json_date_format iso8601
    HTTP_User        ${OPENOBSERVE_API_USERNAME}
    HTTP_Passwd      ${OPENOBSERVE_API_PASSWORD}
    compress         gzip
```
Note that the tail paths follow the volume mounts from our compose file (./data/logs is mounted at /var/log/host, and Nginx writes into the nginx subdirectory), and that Mem_Buf_Limit and Skip_Long_Lines are per-input tail options, so they live in the [INPUT] sections.
This Fluent Bit config has four sections:
- [SERVICE] → sets global options such as the flush interval, logging level, and the parser and plugin files.
- [INPUT] → tails the Nginx access and error logs, tags them, applies the correct parser, caps memory buffering, and skips overly long lines.
- [FILTER] → renames the time field to @timestamp, adds log_type based on the source, and sets stage to production.
- [OUTPUT] → sends processed logs in JSON format to OpenObserve over HTTP with authentication and gzip compression.
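With all three pieces in place, you can verify the pipeline end to end (the nginx stream name comes from the output URI above):

```sh
# Restart the stack so Fluent Bit picks up the new service and config
docker-compose up -d

# Generate some traffic for Fluent Bit to ship
for i in 1 2 3; do curl -s http://localhost:8080/ > /dev/null; done

# Watch Fluent Bit's own logs for parser or connectivity errors
docker logs fluent-bit --tail 20
```

If everything is healthy, the events should appear under the nginx stream in OpenObserve’s Logs view within a few seconds.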
Fluent Bit Configuration with Lua Enhancements
With the Fluent Bit setup above, we’ve configured tail inputs for both access and error logs, applied the relevant parsers, tagged and enriched events with metadata, and sent everything off to OpenObserve in JSON format.
A common real-world requirement is making timestamps consistent across these logs: since Nginx’s access and error logs use different time formats, a Lua script can normalize both to ISO 8601 before forwarding. This example Lua script, named normalize_time.lua, is available in the GitHub repository:
View the timestamp normalization Lua script
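For reference, here is a minimal sketch of what such a normalization function can look like; the actual normalize_time.lua in the repository may differ. Fluent Bit’s Lua filter calls the named function with the event’s tag, timestamp, and record, and expects a status code, timestamp, and record back:

```lua
-- Minimal sketch; the repository script may differ.
-- Fluent Bit passes the tag, the event timestamp (seconds), and the record table.
function normalize_timestamp(tag, timestamp, record)
    -- Rewrite the timestamp field as UTC ISO 8601, e.g. 2025-08-11T19:46:57Z
    record["@timestamp"] = os.date("!%Y-%m-%dT%H:%M:%SZ", math.floor(timestamp))
    -- Return code 2: the record was modified, keep the original event timestamp
    return 2, timestamp, record
end
```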
To use it, add a Lua filter to your Fluent Bit configuration:
```ini
[FILTER]
    Name    lua
    Match   *
    script  scripts/normalize_time.lua
    call    normalize_timestamp
```
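Fluent Bit applies filters in the order they appear in the configuration, so place this Lua filter after the modify filters if it should see the renamed @timestamp field.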
Full Working Example
If you’d like to see a complete, ready-to-run implementation of everything covered in this article—including Nginx, Fluent Bit, OpenObserve setup, parsers, the Lua timestamp normalization script, and a Logrotate service to automatically clean up logs based on your requirements so they never grow out of control—you can check out the GitHub repository:
Feel free to explore, adapt, and use it in your own projects.
⭐ If you find it helpful, consider giving the repo a star—it really helps others discover it and shows appreciation for the work.
Final Thoughts
With the setup in this article, you can stream, parse, normalize, and visualize Nginx logs in real time using Fluent Bit and OpenObserve.
For production environments, make sure you handle sensitive credentials securely. For example:
- On AWS, use AWS Secrets Manager
- On GCP, use Secret Manager
- On Azure, use Key Vault
Avoid storing API keys or passwords directly in configuration files.
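For local development, a small step in that direction is moving the password out of docker-compose.yml into an untracked .env file, which Docker Compose interpolates automatically (a sketch, following this article’s variable names):

```
# .env  (keep this file out of version control)
OPENOBSERVE_API_PASSWORD=change-me
```

Then reference it in docker-compose.yml:

```yaml
    environment:
      - OPENOBSERVE_API_USERNAME=test@mirzabilal.com
      - OPENOBSERVE_API_PASSWORD=${OPENOBSERVE_API_PASSWORD}
```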
By combining proper parsing, normalization, and secure credential management, you’ll have a robust, scalable, and secure logging pipeline ready for real-world workloads.
More From the Author
Are AWS Deep Learning AMIs saving you time? A counterproductive approach may be hindering your progress! Learn about the issues and the solution at How Your AWS Deep Learning AMI is Holding You Back
A cost-effective Deep Learning AMI deployment with Spot Instances: navigate the parallel frontier! Read the full article at: Deep Learning on AWS Graviton2 powered by Nvidia Tensor T4g
Fine-tune FFmpeg with a detailed compilation guide: optimize your multimedia backend for optimal performance! A comprehensive guide is available at Tailor FFmpeg for your needs.
GPU Acceleration for FFmpeg: a step-by-step guide with an Nvidia GPU on AWS! Check the complete guide at Enable Hardware Acceleration with FFmpeg
CPU vs GPU Benchmark for Video Transcoding on AWS: debunking the CPU-GPU myth! See for yourself at Challenging the Video Encoding Cost-Speed Myth
Crafting the Team for Sustainable Success: Are "Rockstars" the Ultimate Solution for a Thriving Team? Explore the insights at Beyond Rockstars
How "Builders" Transform a Team's Productivity: Navigating Vision & Ideas to Reality! Discover more about Builders at Bridging Dreams and Reality
Mental Well-being in Tech: Cultivating a Healthier Workplace for Tech Professionals. Explore insights at The Dark Side of High-Tech Success!
Freelancer to Full-Time: Understanding Corporate Reluctance and the hurdles to overcome. Discover insights at Why Businesses Hesitate to Employ Freelancers
Scaling the Cloud: Discover the Best Scaling Strategy for Your Cloud Infrastructure at Vertical and Horizontal Scaling Strategies
The Future of Efficient Backend Development with Outerbase! Discover the details at Building a Robust Backend with Outerbase