Day 46 of 90 Days of DevOps Challenge: Completing the EFK Stack with Kibana


Yesterday, I deployed Fluentd as a DaemonSet in my Kubernetes cluster to collect logs from all nodes and forward them to Elasticsearch. This marked a major step in setting up centralized log aggregation in my cluster.
Today, I completed the final piece of the EFK stack by deploying Kibana, the visual interface that transforms log data into actionable insights!
What is Kibana?
Kibana is a powerful open-source data visualization tool that works natively with Elasticsearch. It allows users to explore, visualize, and analyze log and event data in real time. With Kibana, logs collected by Fluentd and indexed in Elasticsearch can now be searched, filtered, and visualized using interactive dashboards.
Kibana Setup in Kubernetes
Let’s walk through how I deployed Kibana and connected it to my Elasticsearch instance.
Step 1: Deploy Kibana Service & Deployment
# kibana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:8.5.0
          ports:
            - containerPort: 5601   # Kibana's default HTTP port
          env:
            # Point Kibana at the in-cluster Elasticsearch service.
            # Plain HTTP assumes Elasticsearch was deployed with security disabled.
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch.logging.svc.cluster.local:9200"
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  selector:
    app: kibana
  ports:
    - protocol: TCP
      port: 80          # Service port
      targetPort: 5601  # container port
  type: NodePort
Apply it:
kubectl apply -f kibana-deployment.yaml
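Before moving on, it's worth verifying the rollout with standard kubectl commands:
kubectl rollout status deployment/kibana -n logging
kubectl get pods -n logging -l app=kibana
kubectl get svc kibana -n logging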
Step 2: Access Kibana
Once deployed, I accessed Kibana using the exposed NodePort service.
If using minikube:
minikube service kibana -n logging
If using a cloud-managed cluster, expose it via a LoadBalancer or an Ingress instead (see the sketch below).
With the NodePort service, navigate to:
http://<node-ip>:<nodeport>
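For quick, local-only access on any cluster, kubectl port-forward also works (a standard kubectl command):
kubectl port-forward svc/kibana -n logging 5601:80
Then browse to http://localhost:5601.
For the Ingress route, here is a minimal sketch; the hostname kibana.example.com and the NGINX ingress class are illustrative assumptions, not part of my actual setup:
# kibana-ingress.yaml (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: logging
spec:
  ingressClassName: nginx           # assumes an NGINX ingress controller is installed
  rules:
    - host: kibana.example.com      # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana        # the Service created above
                port:
                  number: 80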
Exploring Logs with Kibana
After logging into the Kibana dashboard:
1. I added a new Index Pattern matching Fluentd's output: logstash-*
2. I selected @timestamp as the time field.
3. I navigated to Discover, where I could instantly search, filter, and view logs from all Kubernetes pods.
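Before creating the index pattern, you can confirm that Fluentd is actually writing indices. Assuming the Elasticsearch Service name from the earlier setup, a port-forward plus curl does the trick:
kubectl port-forward svc/elasticsearch -n logging 9200:9200
curl "http://localhost:9200/_cat/indices/logstash-*?v"
In Discover, KQL makes filtering quick. The field names below follow the conventions of Fluentd's Kubernetes metadata filter and may differ in your setup:
kubernetes.namespace_name : "default" and log : *error*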
Building Visual Dashboards
With logs flowing into Kibana, I created:
- Pie charts of log levels (INFO, ERROR, WARN)
- Time-series graphs of log volume
- Data tables of the most common error messages
- Filters by namespace, pod name, container, etc.
These dashboards turned raw logs into visual intelligence.
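Under the hood, the log-level pie chart is just a terms aggregation. You can preview the same data directly against Elasticsearch; note that the level.keyword field is an assumption that depends on how your applications log and how Fluentd parses those lines:
curl -s "http://localhost:9200/logstash-*/_search?size=0" \
  -H 'Content-Type: application/json' \
  -d '{"aggs": {"log_levels": {"terms": {"field": "level.keyword"}}}}'
(This reuses the Elasticsearch port-forward from the previous step.)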
EFK Stack Recap
Component | Role
--- | ---
Elasticsearch | Log storage & indexing engine
Fluentd | Log collection, transformation, and routing
Kibana | Data exploration and visualization
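A quick way to smoke-test the full pipeline is to run a throwaway pod that emits a recognizable log line, then search for it in Discover (standard kubectl; the pod name is arbitrary):
kubectl run efk-smoke-test --image=busybox --restart=Never -- \
  sh -c 'for i in 1 2 3 4 5; do echo "EFK smoke test $i"; sleep 2; done'
# In Discover, search: log : "EFK smoke test"
# Clean up when done:
kubectl delete pod efk-smoke-test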
Why EFK Matters for DevOps
The EFK stack brings observability full circle by turning raw, unstructured logs into structured, searchable, and actionable insights. It:
- Speeds up debugging and root-cause analysis (RCA)
- Detects anomalies faster
- Helps correlate logs with metrics and alerts
- Empowers DevOps and SRE teams to be proactive
Final Thoughts
With Kibana in place, I’ve officially completed the EFK stack setup in my Kubernetes cluster. This stack empowers me to collect, store, and visualize logs from all running containers, giving me a single pane of glass to explore the health, performance, and behavior of my applications.
The EFK stack, combined with Prometheus and Grafana for metrics and Alertmanager for notifications, gives me a comprehensive observability solution. I now have a firm grip on what's happening inside my cluster, from resource usage to application errors, in real time. But observability is only one half of the DevOps equation.
Tomorrow, on Day 47, I’ll take a leap into the world of CI/CD with Jenkins, where I’ll automate how applications are built, tested, and deployed into this monitored environment.
Stay tuned: the journey from visibility to velocity begins now!