💡Simplifying Kubernetes Log Management with OpenSearch on Aiven

🚀Introduction
In a real-world Kubernetes environment, monitoring logs from multiple pods and services becomes overwhelming, especially when the application is handling heavy traffic. As DevOps engineers, we often face the same question:
“How do I quickly figure out which pod crashed or why the application failed?”
Traditional methods like `kubectl logs` work, but only up to a point. When dealing with large clusters and multi-node setups, we need a centralized observability solution. That’s where OpenSearch Dashboards comes into play: a powerful, open-source log analytics and visualization tool. In this blog, I’ll walk you through:
The real problem we face with scattered logs.
How OpenSearch helps in solving this problem.
How I connected a running Kubernetes cluster to OpenSearch using Aiven.
Exploring OpenSearch Dashboards to visualize logs.
🧩The Problem: Observability Gaps in Real Clusters
Assume we have many microservices running as Pods. When an application goes down, you:
Try checking individual pod logs manually in the cluster (a few typical commands are shown below).
Hope the logs haven’t rotated.
Spend time hunting for where exactly things went wrong.
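In practice, that manual checking looks something like this. The pod name below is purely hypothetical, for illustration only:

```bash
# Find pods that are not healthy, then pull their logs one pod at a time
kubectl get pods -A | grep -v Running
kubectl logs payment-service-7d9f8b6c4-xk2p9 -n prod
kubectl logs payment-service-7d9f8b6c4-xk2p9 -n prod --previous   # if the container already restarted
```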
👇 Pain Points:
No central place to search logs across namespaces or containers.
Debugging takes time, especially in multi-node clusters.
Missing logs from terminated pods.
Difficult to identify trends or patterns in failures.
This is a very real production problem. Our goal is to simplify it with OpenSearch.
🔍 What is OpenSearch?
OpenSearch is an open-source search and analytics engine — originally derived from Elasticsearch. With OpenSearch Dashboards, you can visualize logs, metrics, and traces all in one place.
✅ Why OpenSearch?
Free and open-source
Fast full-text search
Powerful visual dashboards
Easy log filtering with queries
Ideal for DevOps Troubleshooting
🔧 What is Aiven?
Aiven is a managed services platform that lets you spin up services like OpenSearch, Grafana, and Redis in minutes, without building out the underlying infrastructure yourself.
✅ Why Aiven?
Offers prebuilt observability services.
One-click deployment of OpenSearch.
Easy host/port access for connecting from a Kubernetes cluster.
Ideal for rapid testing and small-scale setups.
🛠 Tools Used
| Tool | Purpose |
| --- | --- |
| KillerKoda | Kubernetes sandbox cluster |
| Aiven | Creating the managed OpenSearch service and dashboard |
| OpenSearch Dashboards | Viewing and querying logs |
☁️ Step 1: Set Up an Aiven Account & Project
To get started:
Go to Aiven.io and sign up.
After verifying your email, log in to the console.
From the top navigation, create a project to group your services.
Fig 1: Creating a Project in Aiven
Projects help in logically organizing environments such as staging, dev, or prod.
⚙️ Step 2: Deploy OpenSearch in Aiven
Now, let’s create a managed OpenSearch service.
Click "Create a Service" in your project.
Choose:
Service: OpenSearch
Cloud provider: e.g. AWS or GCP
Region: closest to you
Service plan: start with the smallest (`startup-4` or similar)
Fig 2: Choose a cloud service provider
Fig 3: OpenSearch Deployment Settings
Click Create Service.
Aiven will provision and initialize your OpenSearch instance. This takes around 2–5 minutes.
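If you prefer the command line, Aiven also ships the `avn` CLI. A rough equivalent of the console steps above might look like the sketch below; the service name, cloud region, and plan are placeholders you would adjust:

```bash
# Install and authenticate the Aiven CLI, then create a managed OpenSearch service
pip install aiven-client
avn user login
avn service create my-opensearch \
  --service-type opensearch \
  --cloud google-europe-west1 \
  --plan startup-4
```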
🔑 Step 3: Get Connection Details
Once the service is ready, go to the Service Overview tab.
There, you’ll find:
Hostname and port: used by OpenSearch clients and the REST API.
Username/Password: auto-generated and secure.
Fig 4: Aiven OpenSearch Credentials & Endpoints
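To quickly verify that the endpoint and credentials work, you can hit the OpenSearch REST API root from any machine with curl, substituting the placeholders with the values from your Service Overview page:

```bash
# Should return a small JSON document with the cluster name and OpenSearch version
curl -u <username>:<password> "https://<aiven-host>:<port>"
```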
🚀 Step 4: Simulate a Kubernetes Cluster in KillerKoda
If you don’t have access to a cloud K8s cluster, KillerKoda is a great place to experiment and test log forwarding quickly.
Follow these steps to simulate logs and push them to OpenSearch:
✅ 1. Start Your Playground
Go to KillerKoda's Kubernetes Playground
Start a new session and wait for the cluster node to be ready
✅ 2. Create a Logging Namespace
Inside your playground terminal:
kubectl create ns logging
This will isolate all logging-related resources.
✅ 3. Deploy a Log Generator Pod Using the BusyBox Image for Testing
Create a file `log-generator-deployment.yaml`:
```yaml
# log-generator-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-generator
  namespace: logging
  labels:
    app: log-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-generator
  template:
    metadata:
      labels:
        app: log-generator
    spec:
      containers:
        - name: log-generator
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - |
              while true; do
                echo "$(date) INFO: Service is alive";
                echo "$(date) WARN: Latency approaching threshold";
                echo "$(date) ERROR: Failed to reach database!";
                sleep 4;
              done
```
Apply it:
kubectl apply -f log-generator-deployment.yaml
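Before wiring up Fluent Bit, it is worth confirming that the generator is actually emitting log lines. These are standard kubectl commands, and the deployment name matches the manifest above:

```bash
# Check the pod is running
kubectl get pods -n logging

# Tail a few lines from the log generator
kubectl logs deploy/log-generator -n logging --tail=5
```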
✅ 4. Apply the Fluent Bit ConfigMap
Fluent Bit reads the container logs and forwards them to Aiven’s OpenSearch.
Create a ConfigMap manifest named `fluent-bit-config.yaml`:
```yaml
# fluent-bit-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush             5
        Daemon            Off
        Log_Level         info
        Parsers_File      parsers.conf

    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Parser            docker
        Tag               kube.*
        Refresh_Interval  5
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On

    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Merge_Log           On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [OUTPUT]
        Name                opensearch
        Match               *
        Host                <replace with your Aiven instance hostname>
        Port                <replace with your port>
        Index               fluentbit
        HTTP_User           <replace with your username>
        HTTP_Passwd         <replace with your password>
        TLS                 On
        TLS.verify          Off
        Suppress_Type_Name  On
        Include_Tag_Key     On
        Logstash_Format     On
        Logstash_Prefix     kubernetes
        Replace_Dots        On
        Retry_Limit         False
        # Added for OpenSearch compatibility
        Write_Operation     create

  parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On
        Decode_Field_As  escaped_utf8  log  do_next
        Decode_Field_As  json          log
```
Make sure to replace the placeholders in the config with your actual Aiven OpenSearch host, port, username, and password. The input path `/var/log/containers/*.log` tells Fluent Bit to tail every container log file on the node, capturing the stdout/stderr output of running containers.
Apply it:
kubectl apply -f fluent-bit-config.yaml -n logging
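Optionally, confirm the ConfigMap was created and that your Aiven host and credentials were substituted correctly before deploying the DaemonSet. This is a quick check using standard kubectl commands:

```bash
# Confirm the ConfigMap exists in the logging namespace
kubectl get configmap fluent-bit-config -n logging

# Print the rendered fluent-bit.conf to verify the OUTPUT section placeholders were replaced
kubectl get configmap fluent-bit-config -n logging -o jsonpath='{.data.fluent-bit\.conf}'
```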
✅ 5. Deploy Fluent Bit as a DaemonSet in the Same Namespace
Create a DaemonSet manifest named `fluent-bit-daemonset.yaml`. Mount the `fluent-bit-config` ConfigMap as a volume at `/fluent-bit/etc/`:
```yaml
# fluent-bit-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app: fluent-bit
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:3.0.4
          imagePullPolicy: Always
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config
              mountPath: /fluent-bit/etc/
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config
          configMap:
            name: fluent-bit-config
```
Apply it:
kubectl apply -f fluent-bit-daemonset.yaml -n logging
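Note that the DaemonSet references a `fluent-bit` service account, and the `kubernetes` filter in the config calls the Kubernetes API to enrich logs with pod metadata. If your cluster does not already have that service account, a minimal RBAC sketch like the following should be applied first (the names here are my assumption, chosen to match the DaemonSet above):

```yaml
# fluent-bit-rbac.yaml (assumed names, matching the DaemonSet above)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: logging
```

Apply it with `kubectl apply -f fluent-bit-rbac.yaml`, then confirm the Fluent Bit pods are running with `kubectl get pods -n logging -l app=fluent-bit` and check for OpenSearch connection errors with `kubectl logs -n logging daemonset/fluent-bit`.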
📊 Exploring Logs in Aiven OpenSearch Dashboard
Once Fluent Bit is forwarding logs successfully, follow these steps to view and analyze them in the Aiven OpenSearch dashboard:
🧭 Step 1: Navigate to Dashboard
Open your Aiven OpenSearch Dashboard URL.
Login if prompted.
🔍 Step 2: Create Index Pattern
Open the ☰ menu on the left.
Navigate to Stack Management → Index Patterns.
Click on "Create Index Pattern".
Use the pattern `kubernetes-*`. This must match the index names produced by the `Logstash_Prefix` in the `[OUTPUT]` section of the Fluent Bit ConfigMap; with `Logstash_Format On`, the indices are named `kubernetes-YYYY.MM.DD`.
Fig 5: Index Pattern Creation Panel
Fig 6: Adding Time field for our Index Pattern
Select a time field (usually `@timestamp`) and create the index pattern.
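If no indices show up on the pattern-creation screen, you can check from your terminal whether Fluent Bit has shipped anything yet. This is a quick sanity check against the OpenSearch REST API, using the same host, port, and credentials from Step 3:

```bash
# List the indices created by Fluent Bit (daily indices named kubernetes-YYYY.MM.DD)
curl -u <username>:<password> "https://<aiven-host>:<port>/_cat/indices/kubernetes-*?v"
```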
🔎 Step 3: Discover and Search Logs
Go back to the Discover section under OpenSearch Dashboards in the left-side menu.
Select your index pattern (`kubernetes-*`) from the dropdown.
In the search bar, try queries like:
"failed to reach database"
Fig 7: Logs Search Result for ‘failed to reach database’
This allows us to quickly identify how frequently the issue occurred and which pods generated the logs—eliminating the need to manually check across the cluster. Instead, we can view everything directly on the dashboards, significantly reducing troubleshooting time.
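Here are a few more example queries to try in the Discover search bar. These assume the default field names added by Fluent Bit’s docker parser and `kubernetes` filter (`log`, `kubernetes.namespace_name`, `kubernetes.pod_name`); the exact names can differ depending on your parser and filter settings.

```text
log: "ERROR"                                          # every line containing ERROR
kubernetes.namespace_name: "logging"                  # only logs from the logging namespace
kubernetes.pod_name: log-generator*                   # logs from the log-generator pods
log: "ERROR" and kubernetes.pod_name: log-generator*  # combine filters
```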
🚀 Additional Integrations
Beyond centralized logging, you can integrate additional observability tools to extend this setup. Grafana can build visually rich dashboards on top of OpenSearch data. To capture distributed traces, you can plug in OpenTelemetry or Jaeger. Alerts can also be routed to collaboration tools like Slack, Microsoft Teams, or email to notify teams of critical issues in real time.
🧾 Conclusion
By forwarding Kubernetes logs using Fluent Bit and analyzing them through the Aiven OpenSearch dashboard, developers and platform teams can gain powerful insights into workload behavior. This setup helps quickly pinpoint issues such as application errors or connectivity problems by filtering logs by index patterns. The system provides a scalable and extensible foundation for full-stack observability, improving debugging efficiency and reducing mean time to resolution.
📚 Reference
This blog is inspired by the excellent work in the observability-zero-to-hero GitHub repository by @iam-veeramalla. It's a great resource if you're diving into observability tools and practices.
🤝 Let’s Connect & Collaborate
If you enjoyed this post and want to dive deeper into DevOps practices, observability, or automation workflows — I’d love to connect!
I'm always open to sharing ideas, learning together, and writing more about DevOps tools and real-world implementations. Feel free to drop a comment, start a discussion, or reach out — let’s keep building better workflows, one tool at a time.