Monitoring Logs from EC2 to Kubernetes using Filebeat and the ELK Stack: A Zero-Cost, Real-World Setup
By JAY | Python Developer | Real-world DevOps


In today's cloud-native world, log monitoring is more than just reading print()
statements: it's about visibility, debugging, performance tracking, and security. I recently had a real-world requirement: ship logs from a GPU-intensive Flask application running on an EC2 instance to an ELK stack deployed entirely inside a Kubernetes cluster.
I had to do it with zero budget: no Datadog, no CloudWatch, no third-party paid toolkits. The answer? The ELK Stack + Filebeat + Redis, all deployed with Helm and configured like a pro.
This post is not just a tutorial; it's a story from a developer in the trenches. So whether you're new to observability or trying to connect your on-prem/VM/EC2 logs to a Kubernetes-hosted ELK stack, this is for you.
What is the ELK Stack?
Before we dive in, here's a quick TL;DR for newcomers:
Elasticsearch - a powerful search engine where logs are indexed and stored.
Logstash - a log pipeline tool that transforms and routes logs.
Kibana - a dashboarding and visualization tool for exploring logs.
Together, they form the ELK Stack.
Now throw Filebeat into the mix: it acts as a lightweight log shipper from your machines to Logstash.
ELK is used for application monitoring, infrastructure observability, security (SIEM), and more. If you're building scalable systems, this is a must-have tool in your kit.
Use Case
Here's what I needed to solve:
Flask app running on EC2 (Ubuntu) with GPU workloads.
ELK stack running inside a Kubernetes cluster.
Logs from EC2 should appear in Kibana dashboards.
Entire setup should be zero-cost, i.e., using open-source tooling.
Architecture:
EC2 (Ubuntu) → Filebeat (log shipper) → Redis (queue buffer over TCP, deployed remotely) → Logstash (K8s) → Elasticsearch (K8s) → Kibana (K8s)
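Why Redis in the middle? Filebeat's Redis output pushes each event onto a list, and Logstash's redis input pops events off the same list, so Redis absorbs bursts and buffers logs whenever Logstash is down or restarting. A rough sketch of the mechanics with redis-cli (illustration only; the key name matches the configs later in this post):
# Filebeat pushes each event as a JSON string onto the "logstash" list:
redis-cli -h <REDIS_IP> RPUSH logstash '{"message":"GET /health 200"}'
# Logstash (data_type => "list") does blocking pops from the same list:
redis-cli -h <REDIS_IP> BLPOP logstash 5
# If Logstash is down, events simply accumulate until it reconnects:
redis-cli -h <REDIS_IP> LLEN logstash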
Step-by-Step Setup
1. Deploy the ELK Stack inside Kubernetes (using Helm)
First, set up the ELK stack in your Kubernetes cluster (minikube, EKS, or any free cluster).
Install Helm if it's not already installed:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Add the Elastic Helm repo:
helm repo add elastic https://helm.elastic.co
helm repo update
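Optionally, confirm the charts are now visible before installing:
helm search repo elastic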
If you don't already have the ELK stack set up, install each component:
Elasticsearch:
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
Kibana:
helm install kibana elastic/kibana -n logging
Logstash:
helm install logstash elastic/logstash -n logging
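One piece the commands above don't cover is Redis itself: the pipeline needs a Redis instance reachable both from inside the cluster (for Logstash) and from the EC2 box (for Filebeat). As a sketch, here's one way to run it in the same namespace using the Bitnami chart (my assumption, not part of the original setup; any reachable Redis works):
# Assumption: Bitnami Redis chart, standalone, no auth (demo only; secure this for real use).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis -n logging \
  --set architecture=standalone \
  --set auth.enabled=false \
  --set master.service.type=LoadBalancer   # exposes 6379 so EC2 outside the cluster can reach it
# Note: the chart names its Service <release>-master, i.e. redis-master.logging.svc.cluster.local;
# use that hostname (or the LoadBalancer IP) in the Logstash and Filebeat configs below.
kubectl get pods -n logging   # confirm the elasticsearch, kibana, logstash, and redis pods are Running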
After installing Logstash, update its pipeline using a custom values.yaml. The pipeline below pops events off the Redis list, parses JSON log lines (non-JSON lines pass through untouched thanks to skip_on_invalid_json), and writes them to a daily ec2-logs-* index:
logstashPipeline:
  logstash.conf: |
    input {
      redis {
        host => "redis.logging.svc.cluster.local"  # or your Redis IP
        port => 6379
        data_type => "list"
        key => "logstash"
      }
    }
    filter {
      json {
        source => "message"
        skip_on_invalid_json => true
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master.logging.svc.cluster.local:9200"]  # or your Elasticsearch endpoint
        index => "ec2-logs-%{+YYYY.MM.dd}"
      }
    }
Apply the pipeline to the existing Logstash release:
helm upgrade logstash elastic/logstash -n logging -f values.yaml
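Before moving on, it's worth checking that Logstash rolled out and connected to Redis. The resource and label names below follow the chart's <release>-logstash defaults; adjust if yours differ:
kubectl rollout status statefulset/logstash-logstash -n logging
# Look for the pipeline starting and the Redis input connecting without errors:
kubectl logs -n logging -l app=logstash-logstash --tail=50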
2. Install Filebeat on EC2
SSH into your EC2 instance (or virtual machine) and install the package:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.13-amd64.deb
sudo dpkg -i filebeat-7.17.13-amd64.deb
Update the filebeat.yml file, usually found at /etc/filebeat/filebeat.yml:
nano /etc/filebeat/filebeat.yml
Update it with:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /home/ubuntu/myapp/logs/*.log

output.redis:
  hosts: ["<REDIS_K8S_IP>:6379"]  # or your Redis IP
  key: "logstash"  # make sure this matches the key in the Logstash redis input
  db: 0
  timeout: 5
  datatype: "list"
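Before starting the service, Filebeat can validate both the file itself and the connection to Redis:
sudo filebeat test config   # checks filebeat.yml syntax
sudo filebeat test output   # checks connectivity to the configured Redis output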
Enable and start Filebeat:
sudo filebeat modules enable system   # optional: also ship system logs
sudo systemctl enable filebeat
sudo systemctl start filebeat
Note: skip sudo filebeat setup here; it loads index templates and dashboards directly into Elasticsearch and fails when the output is Redis.
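At this point a quick end-to-end smoke test helps. Append a JSON line to a file matched by the paths glob above (app.log here is just an example name) and watch the Redis list; a length that stays at 0 is also fine, since it means Logstash is draining events as fast as they arrive:
echo '{"level":"INFO","msg":"elk smoke test"}' >> /home/ubuntu/myapp/logs/app.log
redis-cli -h <REDIS_K8S_IP> LLEN logstash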
3. View the logs in Kibana
Follow the steps below to view your logs:
Go to your Kibana dashboard.
On the left sidebar → Stack Management.
In Stack Management → Index Patterns → Create index pattern.
Our index will be visible there as ec2-logs-*; enter that pattern and select @timestamp as the time field.
On the left sidebar, go to the Discover tab; your logs will be visible under that index pattern.
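If nothing shows up in Discover, check from the Elasticsearch side that the daily index exists and is receiving documents (the service name follows the chart's elasticsearch-master default):
kubectl port-forward -n logging svc/elasticsearch-master 9200:9200
# then, in another terminal:
curl 'http://localhost:9200/_cat/indices/ec2-logs-*?v'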
Final Thoughts
I loved working on this setup because:
It was real, not a Hello World.
I learned about cross-platform log shipping.
I made it work 100% free, using Filebeat + Redis as a clean buffer.
If you're building hybrid systems, some on VMs and some in Kubernetes, this approach is not just scalable but also community-approved.
Let's Connect
If this helped you:
Reach out on LinkedIn.
Let's talk about DevOps, Python, or monitoring.
I'm always open to sharing ideas, collaborating, and learning together.
#PythonDev #ELKStack #Kubernetes #Filebeat #Monitoring #ZeroCostDevOps #Redis #LogShipping #DevOps #Logging #OpenSource #Python #EC2 #Infrastructure #Logstash #Elasticsearch #Kibana
Written by Jay Deshmukh
Backend Developer | Python & Flask | MongoDB & ELK. Building real-world solutions with logs, APIs, and cloud tools, one side project at a time. Passionate about monitoring, scalable infra, and clean code. I write to share what I learn, fix, and break, especially when no budget is involved. Exploring DevOps | Learning in public | Open to collaboration.