πŸš€ Monitoring Logs from EC2 to Kubernetes using Filebeat and the ELK Stack β€” A Zero-Cost, Real-World Setup

By Jay Deshmukh | Python Developer | Real-world DevOps


In today’s cloud-native world, log monitoring is more than just reading print() statements β€” it's about visibility, debugging, performance tracking, and security. I recently had a real-world requirement: ship logs from a GPU-intensive Flask application running on an EC2 instance to an ELK stack fully deployed inside a Kubernetes cluster.

I had to do it with zero budget β€” no Datadog, no CloudWatch, no third-party paid tooling. The answer? The ELK stack deployed with Helm inside Kubernetes, plus Filebeat and Redis to ship and buffer the logs.

This post is not just a tutorial; it’s a story from a developer in the trenches. So whether you're new to observability or trying to connect your on-prem/VM/EC2 logs to a K8s ELK stack, this is for you.

πŸ” What is the ELK Stack?

Before we dive in, here's a quick TL;DR for newcomers:

  • Elasticsearch – A powerful search engine where logs are indexed and stored.

  • Logstash – A log pipeline tool that transforms and routes logs.

  • Kibana – A dashboarding and visualization tool to explore logs.

  • Together, they form the ELK Stack.

Now throw Filebeat into the mix β€” it acts as a lightweight log shipper from your machines to Logstash.

ELK is used in monitoring applications, infrastructure observability, security (SIEM), and more. If you’re building scalable systems, this is a must-have tool in your kit.


πŸ’‘ Use Case

Here’s what I needed to solve:

  • Flask app running on EC2 (Ubuntu) with GPU workloads.

  • ELK stack running inside a Kubernetes cluster.

  • Logs from EC2 should appear in Kibana dashboards.

  • Entire setup should be zero-cost, i.e., using open-source tooling.

Architecture:

EC2 (Ubuntu)
 └── Filebeat (log shipper)
      └── Redis (queue buffer, TCP) ← deployed so both EC2 and the cluster can reach it
           └── Logstash (K8s) β†’ Elasticsearch (K8s) β†’ Kibana (K8s)

Redis sits in the middle as a buffer: Filebeat only needs one reachable TCP endpoint to push to, and Logstash drains the queue at its own pace, so a brief outage on either side doesn't drop logs.

πŸš€ Step-by-Step Setup:

🟩 1. Deploy ELK Stack inside Kubernetes (using Helm)

First, set up the ELK stack in your Kubernetes cluster (minikube, EKS, or any free cluster).

helm repo add elastic https://helm.elastic.co

helm repo update

  • If you don't already have ELK set up, install Elasticsearch and Kibana first:

elasticsearch:

helm install elasticsearch elastic/elasticsearch -n logging --create-namespace

kibana:

helm install kibana elastic/kibana -n logging

(Logstash is installed in the next step, once its pipeline is configured.)
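
Before wiring up the pipeline, it's worth a quick sanity check that the pods came up (pod names below follow the charts' defaults):

kubectl get pods -n logging

You should see elasticsearch-master-* and kibana-* pods reach the Running state before moving on.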

  • Logstash needs a pipeline that reads from Redis, parses the JSON log lines (the filter expects one JSON object per line; anything else passes through untouched thanks to skip_on_invalid_json), and writes to Elasticsearch. Define it in a custom values.yaml:

    logstashPipeline:
      logstash.conf: |
        input {
          redis {
            host => "redis.logging.svc.cluster.local"  # or your Redis host/IP
            port => 6379
            data_type => "list"
            key => "logstash"
          }
        }
        filter {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
        output {
          elasticsearch {
            hosts => ["http://elasticsearch-master.logging.svc.cluster.local:9200"]  # or your Elasticsearch endpoint
            index => "ec2-logs-%{+YYYY.MM.dd}"
          }
        }

  • Deploy Logstash with that pipeline:

    helm install logstash elastic/logstash -n logging -f values.yaml
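
One piece the steps above don't cover is Redis itself. Any Redis instance reachable from both the EC2 host and the cluster will do. As one option β€” a minimal sketch assuming the Bitnami chart, with auth disabled for brevity (enable it for anything real) β€” you can run it in the same namespace and expose it so EC2 can reach it over TCP:

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install redis bitnami/redis -n logging \
  --set architecture=standalone \
  --set auth.enabled=false \
  --set master.service.type=LoadBalancer

The LoadBalancer service type is what gives the EC2 instance an endpoint to push to; a NodePort plus a security-group rule for port 6379 works too.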

🟦 2. Install Filebeat on EC2

  • SSH into your EC2 instance (or any VM) and download Filebeat β€” pick a version compatible with the Elasticsearch version you deployed:

    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.13-amd64.deb

    sudo dpkg -i filebeat-7.17.13-amd64.deb
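
    A quick check that the package installed correctly (prints the version and build info):

    filebeat version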

  • Edit the Filebeat config, usually found at /etc/filebeat/filebeat.yml:

    sudo nano /etc/filebeat/filebeat.yml

    Update it with:
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /home/ubuntu/myapp/logs/*.log

    output.redis:
      hosts: ["<REDIS_K8S_IP>:6379"]  # the Redis endpoint reachable from EC2
      key: "logstash"  # must match the key in the Logstash redis input
      db: 0
      timeout: 5
      datatype: "list"

  • Enable and start Filebeat:

    sudo systemctl enable filebeat

    sudo systemctl start filebeat

    Note: "sudo filebeat setup" (and the system module) isn't needed here β€” setup loads index templates and dashboards straight into Elasticsearch/Kibana, which requires a direct Elasticsearch output, and we're shipping through Redis instead.
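
To check that logs are actually flowing, a quick sanity pass (using <REDIS_K8S_IP> as in the config above; redis-cli comes from the redis-tools package):

    sudo filebeat test config

    journalctl -u filebeat -f

    redis-cli -h <REDIS_K8S_IP> llen logstash

    echo '{"level": "INFO", "msg": "inference complete", "latency_ms": 412}' >> /home/ubuntu/myapp/logs/app.log

The first command validates filebeat.yml, the second tails Filebeat's own logs for connection errors, and the llen call shows the queue depth β€” a number that grows but never shrinks means Logstash isn't draining it. The echo line (field names are just an illustration) appends a test event that should come out the other end in Kibana, since the Logstash json filter parses one JSON object per line.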

🟨 3. View the logs in Kibana

Follow these steps to view your logs:

  1. Open your Kibana dashboard.

  2. In the left sidebar, go to Stack Management.

  3. Under Stack Management, open Index Patterns and click Create index pattern.

  4. Our index will show up as ec2-logs-* β€” select it and choose @timestamp as the time field.

  5. Head to the Discover tab in the left panel; the logs will appear under that index pattern.
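
If nothing shows up, you can check whether Elasticsearch received the index at all (assuming the chart's default service name and security disabled; with security enabled you'd need https plus credentials):

    kubectl port-forward svc/elasticsearch-master 9200:9200 -n logging

    curl "http://localhost:9200/_cat/indices/ec2-logs-*?v"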

🧠 Final Thoughts

I loved working on this setup because:

  • It was real, not a Hello World.

  • I learned about cross-platform log shipping.

  • I made it work 100% free, using Filebeat + Redis as a clean buffer.

If you're building hybrid systems β€” some on VMs, some in Kubernetes β€” this approach is not just scalable but also community-approved. 🎯

πŸ“¬ Let's Connect

If this helped you:

  • Reach out on LinkedIn

  • Let's talk about DevOps, Python, or monitoring.

  • I'm always open to share ideas, collaborate, and learn together.

#PythonDev #ELKStack #Kubernetes #Filebeat #Monitoring #ZeroCostDevOps #Redis #LogShipping #DevOps #Logging #OpenSource #Python #EC2 #Infrastructure #Logstash #Elasticsearch #Kibana


Written by Jay Deshmukh

πŸ’» Backend Developer | 🐍 Python & Flask | πŸ“¦ MongoDB & ELK Building real-world solutions with logs, APIs, and cloud tools β€” one side project at a time. Passionate about monitoring, scalable infra, and clean code. I write to share what I learn, fix, and break β€” especially when no budget is involved. βš™οΈ Exploring DevOps | πŸš€ Learning in public | 🀝 Open to collaboration