Integrating Kubernetes with ELK Stack: A Self-Managed Approach - Part 1
In today's cloud-native world, effective log management is crucial for maintaining and troubleshooting applications. This post walks you through integrating a Kubernetes cluster with the ELK (Elasticsearch, Logstash, Kibana) stack to provide a robust logging solution for your containerized applications.
Prerequisites
Before we begin, ensure you have the following:
A running Kubernetes cluster (version 1.19+)
kubectl CLI configured to interact with your cluster
Helm 3 installed
Basic understanding of Kubernetes concepts
Step 1: Setting Up Namespace
First, let's create a dedicated namespace for our logging stack:
kubectl create namespace logging
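To confirm the namespace exists before moving on, you can list it back:
kubectl get namespace logging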
Step 2: Deploying Elasticsearch
We'll use Helm to deploy Elasticsearch. Add the Elastic Helm repository and update it:
helm repo add elastic https://helm.elastic.co
helm repo update
Now, create a values file named elasticsearch-values.yaml:
replicas: 3
minimumMasterNodes: 2
resources:
  requests:
    cpu: "100m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
Deploy Elasticsearch:
helm install elasticsearch elastic/elasticsearch -f elasticsearch-values.yaml -n logging
Verify the deployment:
kubectl get pods -n logging | grep elasticsearch
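Once all three pods report Running, you can optionally check cluster health directly. This is a minimal sketch assuming the chart's default service name elasticsearch-master on port 9200 (the same endpoint referenced later in this guide):
# Forward the Elasticsearch HTTP port to your machine
kubectl port-forward service/elasticsearch-master 9200 -n logging
# In another terminal, query cluster health; with 3 replicas the status should eventually be "green"
curl -s http://localhost:9200/_cluster/health?pretty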
Step 3: Deploying Kibana
Create a kibana-values.yaml file:
elasticsearchHosts: "http://elasticsearch-master:9200"
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "500m"
    memory: "1Gi"
Deploy Kibana:
helm install kibana elastic/kibana -f kibana-values.yaml -n logging
Verify the deployment:
kubectl get pods -n logging | grep kibana
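You can also wait for the Kibana Deployment to become available; kibana-kibana is the name the chart gives the Deployment and the Service used later in this guide:
kubectl rollout status deployment/kibana-kibana -n logging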
Step 4: Deploying Logstash
Create a logstash-values.yaml file:
logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch-master:9200"]
  pipelines.yml: |
    - pipeline.id: main
      path.config: "/usr/share/logstash/pipeline"

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }

resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "500m"
    memory: "1Gi"
Deploy Logstash:
helm install logstash elastic/logstash -f logstash-values.yaml -n logging
Verify the deployment:
kubectl get pods -n logging | grep logstash
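To confirm the pipeline actually started, you can tail the Logstash logs and look for a message indicating the main pipeline is running (the StatefulSet name assumes the release name used above):
kubectl logs statefulset/logstash-logstash -n logging --tail=50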
Step 5: Deploying Filebeat
Filebeat will collect logs from all pods in the cluster. Create a filebeat-values.yaml file:
daemonset:
  enabled: true

filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"

    output.logstash:
      hosts: ["logstash-logstash:5044"]

resources:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "300m"
    memory: "200Mi"
Deploy Filebeat:
helm install filebeat elastic/filebeat -f filebeat-values.yaml -n logging
Verify the deployment:
kubectl get pods -n logging | grep filebeat
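Because Filebeat runs as a DaemonSet, you should see one pod per node. A quick way to check (the DaemonSet name assumes the release name used above):
kubectl get daemonset filebeat-filebeat -n logging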
Step 6: Configuring Ingress (Optional)
If you want to access Kibana from outside the cluster, you can set up an Ingress. First, ensure you have an Ingress controller installed in your cluster.
Create a file named kibana-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: kibana.test.com  # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana
                port:
                  number: 5601
Apply the Ingress:
kubectl apply -f kibana-ingress.yaml
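If DNS for your domain is not set up yet, you can still verify the Ingress by checking its assigned address and sending a request with an explicit Host header (replace <INGRESS_IP> with the ADDRESS shown):
kubectl get ingress kibana-ingress -n logging
curl -I -H "Host: kibana.test.com" http://<INGRESS_IP>/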
Step 7: Testing the Setup
To test our logging pipeline, let's deploy a sample application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Save this as nginx-test.yaml and apply it:
kubectl apply -f nginx-test.yaml
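To make sure the pod actually produces log lines for the pipeline to pick up, you can generate a little traffic against it (assuming the deployment name above):
# Forward the nginx port locally and send a few requests so access-log lines are written
kubectl port-forward deployment/nginx-test 8080:80 &
for i in $(seq 1 10); do curl -s http://localhost:8080/ > /dev/null; done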
Step 8: Viewing Logs in Kibana
Access Kibana through your Ingress URL or by port-forwarding:
kubectl port-forward service/kibana-kibana 5601 -n logging
Open a web browser and go to http://localhost:5601
In Kibana, go to "Management" > "Stack Management" > "Index Patterns"
Create a new index pattern. You should see indices like filebeat-*
Go to "Discover" in the main menu and select your index pattern
You should now see logs from your Kubernetes cluster, including the nginx-test deployment
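To narrow Discover down to just the test deployment, you can filter on the Kubernetes metadata Filebeat attached to each event, for example with a KQL query like the following (the field name assumes the add_kubernetes_metadata processor configured earlier):
kubernetes.labels.app : "nginx-test"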
Step 9: Setting Up Monitoring (Optional)
To monitor the health of your ELK stack, you can use Prometheus and Grafana:
Install Prometheus using Helm:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
Configure Elasticsearch to expose metrics by adding the following to your elasticsearch-values.yaml:
http:
  service:
    headless:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9114"
Update your Elasticsearch deployment:
helm upgrade elasticsearch elastic/elasticsearch -f elasticsearch-values.yaml -n logging
Access Grafana (installed with Prometheus) and import Elasticsearch dashboards
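If you have not exposed Grafana externally, a quick way to reach it is to port-forward the Service created by the kube-prometheus-stack release and read the generated admin password from its Secret (names assume the release name prometheus used above):
kubectl port-forward service/prometheus-grafana 3000:80 -n monitoring
kubectl get secret prometheus-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d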
Conclusion
You now have a fully integrated ELK stack running on your Kubernetes cluster. This setup provides a centralized logging solution that can handle logs from all your applications and system components.
Remember to:
Regularly update your ELK components
Monitor the resource usage of your logging components
Implement log rotation and retention policies to manage storage
Secure your ELK stack by implementing proper authentication and encryption
By following this guide, you've set up a robust logging infrastructure that will help you maintain and troubleshoot your Kubernetes applications more effectively.