Day 44 of 90 Days of DevOps Challenge: Kicking Off Centralized Logging with EFK


Yesterday, on Day 43 of my #90DaysOfDevOps journey, I integrated Alertmanager into my Kubernetes monitoring stack. With Prometheus collecting metrics, Grafana displaying dashboards, and Alertmanager sending out real-time alerts, my observability pipeline for metrics is now complete.

But observability is more than just metrics and alerts.

Logs provide essential, granular detail: what happened, when, and why. They are often the first place we look when diagnosing issues.

So today, I'm stepping into the next dimension of observability: Centralized Logging.

What I’m Covering Today:

  • Why logs are essential for observability

  • Introduction to the EFK stack (Elasticsearch, Fluentd, Kibana)

  • How the stack fits together

  • High-level setup flow for Kubernetes

  • What’s next in this logging journey

Why Centralized Logging?

While Prometheus answers "what's happening," logs tell you the story behind the scenes: the exact error message, stack trace, user action, or system output that led to an event.

Centralized logging ensures that:

  • You don’t have to jump into individual pods or servers to read logs.

  • Logs are persisted and searchable over time.

  • You can analyze patterns, anomalies, and failures across your system.

This is where the EFK stack shines.
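
For contrast, here is what the first bullet replaces: the manual, per-pod routine of chasing logs with kubectl. (The pod and label names below are hypothetical.)

```bash
# Without centralized logging: chase logs pod by pod
kubectl logs my-app-7d4f9c-abcde -n production              # one pod's current container
kubectl logs my-app-7d4f9c-abcde -n production --previous   # the last crashed container
kubectl logs -l app=my-app -n production --tail=100         # recent lines from pods behind a label
```

This works for a quick look, but the output disappears when pods are rescheduled, and there is no way to search across nodes or over time.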

What is the EFK Stack?

| Component | Role |
| --- | --- |
| Elasticsearch | Stores logs in a searchable format (NoSQL DB) |
| Fluentd | Collects, transforms, and ships logs from nodes/pods |
| Kibana | Visualizes logs, enables search and filtering |

Together, they create a complete log aggregation and visualization pipeline.
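
To make that hand-off concrete, here is a minimal Fluentd pipeline sketch: one source that tails container log files and one match that ships everything to Elasticsearch. The paths, hostname, and port are illustrative assumptions; the real values depend on how the cluster and charts are configured.

```
# Illustrative Fluentd pipeline sketch (not a production config)

# Collect: tail the container log files on the node
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json            # assumes JSON-formatted container log lines
  </parse>
</source>

# Ship: forward everything tagged kubernetes.* to Elasticsearch
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local   # assumed Service name/namespace
  port 9200
  logstash_format true    # writes daily indices (logstash-YYYY.MM.DD)
</match>
```

In a real deployment, a Kubernetes metadata filter usually sits between these two steps, enriching each record with pod, namespace, and label information.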

EFK in Kubernetes — High-Level Setup

I’ll be setting up EFK using Helm charts and Kubernetes manifests in the upcoming posts, but here’s today’s high-level overview of how the pieces fit together:

  1. Fluentd as DaemonSet:

    • Runs on every node

    • Reads logs from /var/log/containers/ and /var/log/pods/

    • Forwards them to Elasticsearch

  2. Elasticsearch Cluster:

    • Receives and indexes logs

    • Stores them for querying and analysis

  3. Kibana Service:

    • Connects to Elasticsearch

    • Provides a web UI to search, visualize, and filter logs
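
As a preview of step 1, here is a trimmed DaemonSet sketch showing how Fluentd gets read access to the node's log directories through a hostPath mount. The image tag, namespace, and names are placeholders; the full manifest (or Helm values) will come in the follow-up posts.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging                  # placeholder namespace
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # example image; pick the tag that matches your Elasticsearch version
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log     # covers /var/log/containers and /var/log/pods
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```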

Here’s a basic architectural flow:

[ Kubernetes Pods ]
        ⬇
[ Container Logs (/var/log/containers) ]
        ⬇
[ Fluentd DaemonSet ]
        ⬇
[ Elasticsearch Cluster ]
        ⬇
[ Kibana Dashboard ]
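
And here is roughly what the Helm side of the install will look like. This is a sketch: the repository URLs and chart names are the public Elastic and Fluent charts, and the logging namespace is my own choice; the exact values files come in the next posts.

```bash
# Add the public chart repositories
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# Elasticsearch and Kibana in a dedicated namespace
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
helm install kibana elastic/kibana -n logging

# Fluentd as a DaemonSet, shipping to the Elasticsearch service above
helm install fluentd fluent/fluentd -n logging
```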

Logging vs Metrics — Why Both Matter

| Feature | Metrics (Prometheus) | Logs (EFK) |
| --- | --- | --- |
| Format | Structured, numeric | Unstructured text |
| Use Case | Monitoring, alerting | Debugging, auditing |
| Storage | Time-series DB | Document store (Elasticsearch) |
| Visual Tool | Grafana | Kibana |

Logs provide context that metrics often cannot, such as detailed error traces, authentication failures, or configuration warnings.
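
For example, a Prometheus alert can tell me that the 5xx rate spiked, but only a log search shows the stack trace behind it. A hypothetical side-by-side (the metric name and the Kibana field names are assumptions; they depend on your exporters and on how Fluentd parses and enriches records):

```
# Metrics (PromQL): HTTP 5xx request rate over the last 5 minutes
sum(rate(http_requests_total{status=~"5.."}[5m]))

# Logs (Kibana KQL): the matching errors for the same window
kubernetes.labels.app : "checkout" and log : *NullPointerException*
```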

Final Thoughts

Metrics, dashboards, and alerts gave me a pulse on my Kubernetes cluster.
Now, logs will give me the voice of the system.

Today marks the start of this exciting journey into centralized logging. I can’t wait to build full log observability into my stack.

Stay tuned for Day 45, where I’ll set up Fluentd and start collecting logs from my cluster.
