Running ELK Stack in a Minikube Kubernetes Environment: The Ultimate Guide
Overview of ELK Stack (Elasticsearch, Logstash, Kibana)
The ELK Stack is a popular set of tools used for searching, analyzing, and visualizing log data in real time. It is widely used in logging, monitoring, and observability use cases.
Elasticsearch: A distributed search and analytics engine. It is designed to index, store, and search large volumes of data quickly. Elasticsearch is the backbone of the ELK Stack and is typically used to store logs, metrics, and other time-series data. It supports full-text search, filtering, and aggregation of data.
Logstash: A powerful pipeline for collecting, processing, and transforming log data before sending it to Elasticsearch. It can handle a variety of data sources (e.g., logs, metrics, events) and format them for easy indexing and searching in Elasticsearch. Logstash supports input, filter, and output plugins for versatile data handling.
Kibana: A data visualization tool that works with Elasticsearch. It provides a web interface to explore, visualize, and analyze the data stored in Elasticsearch. Kibana is often used for creating dashboards, viewing logs, and making sense of large volumes of data in real-time.
Use cases and why run ELK on Minikube for local development/testing
Running ELK Stack on Minikube, a local Kubernetes cluster, is beneficial in various scenarios, particularly for developers and teams looking to test and develop in an isolated environment.
Use Cases for ELK Stack:
Log Management: ELK is commonly used for aggregating and managing logs from various sources. It allows you to search, filter, and analyze logs in real time, which is invaluable for debugging and monitoring applications.
Security Monitoring: With ELK, you can track security-related events, analyze patterns, and quickly respond to incidents. Logstash can parse security logs and Elasticsearch can store them for easy retrieval and analysis.
Application Performance Monitoring: By feeding application logs, metrics, and system performance data into ELK, you can create visualizations and dashboards that provide insight into application health and performance.
Infrastructure Monitoring: ELK can be used to monitor servers, containers, or cloud services. It aggregates system logs and metrics, allowing users to track the status of their infrastructure.
Why Use Minikube for ELK Stack Setup:
Cost-Effective: Minikube allows you to spin up a local Kubernetes environment on your laptop or workstation, which is ideal for testing and development purposes. It’s free and requires fewer resources than setting up a full Kubernetes cluster in the cloud.
Portability: Minikube provides a consistent environment to deploy and test ELK Stack locally, which is especially useful when you want to develop and debug without the overhead of managing cloud resources.
Realistic Kubernetes Testing: Minikube simulates a real Kubernetes environment, so you can develop and test Kubernetes-based configurations and deployments before scaling to a larger, production-level Kubernetes cluster.
Rapid Iteration: With Minikube, you can quickly deploy and tear down your ELK Stack setups. It’s perfect for developers who need to test changes, configurations, or code updates in an isolated, local Kubernetes environment without worrying about cloud infrastructure costs or complexities.
Prerequisites for Setting Up Minikube and ELK Stack
Before getting started with running the ELK stack on Minikube, ensure the following tools are installed and properly configured:
Note: This setup will be demonstrated on Ubuntu 22.04. However, the guide provides a high-level overview, which can be adapted to other operating systems as well.
Docker
Docker is required for running containers locally. Minikube uses Docker to manage the Kubernetes cluster and containerized applications.
kubectl
kubectl is the command-line tool for interacting with Kubernetes clusters. Minikube creates a local Kubernetes cluster, and kubectl is used to deploy applications, inspect cluster resources, and manage configurations.
Minikube
Minikube is a local Kubernetes cluster that runs on your machine. It allows you to easily create and manage Kubernetes clusters for testing and development.
Basic Kubernetes Knowledge
Familiarity with basic Kubernetes concepts (pods, services, deployments, namespaces) is recommended to understand how applications are deployed and managed in Minikube.
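If those concepts are new to you, the standard kubectl inspection commands below show how each object type is listed; they are safe to run against any cluster, including the Minikube cluster created later in this guide:

```shell
# List pods -- the smallest deployable units in Kubernetes
kubectl get pods

# List services, which expose sets of pods over the network
kubectl get services

# List deployments, which manage replicated sets of pods
kubectl get deployments

# List namespaces, which partition cluster resources
kubectl get namespaces
```

Each command accepts `-n <namespace>` to target a specific namespace, or `-A` to list across all namespaces.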
Setting up the environment
Environment Preparation
Update System Packages
It's essential to refresh the package index and upgrade installed packages for security and compatibility.
sudo apt update && sudo apt upgrade -y
Install Docker and Dependencies
Docker is required to run containers. This installs Docker and its dependencies.
# Install dependencies for Docker
sudo apt install -y apt-transport-https ca-certificates curl
# Install Docker
sudo apt install -y docker.io
# Enable and start Docker
sudo systemctl enable docker
sudo systemctl start docker
Install Minikube for Kubernetes
Minikube allows you to run a local Kubernetes cluster on your machine.
Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Start Minikube with Docker as the Driver
This command initializes Minikube with Docker as its driver, so the cluster runs within Docker.
minikube start --driver=docker
Verify Minikube Installation
Ensures Minikube started successfully.
minikube status
Optionally, you can specify resource limits if your machine has sufficient resources:
minikube start --cpus=4 --memory=8192 --driver=docker
Install and Configure kubectl
kubectl is a command-line tool that interacts with Kubernetes clusters. Install it as follows:
sudo snap install kubectl --classic
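You can confirm the installation and check which cluster kubectl is pointing at:

```shell
# Print the client version (works even without a running cluster)
kubectl version --client

# Show the cluster context kubectl currently targets; after `minikube start`,
# this is set to "minikube" automatically
kubectl config current-context
```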
Deploying Elasticsearch Cluster on Kubernetes
Install ECK (Elastic Cloud on Kubernetes) CRDs and Operator
This step deploys necessary Custom Resource Definitions (CRDs) and the Elastic Operator, which manages Elasticsearch and Kibana resources in Kubernetes.
# Install CRDs
kubectl create -f https://download.elastic.co/downloads/eck/2.14.0/crds.yaml
# Install Operator
kubectl apply -f https://download.elastic.co/downloads/eck/2.14.0/operator.yaml
You can monitor the operator logs with:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
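Before deploying Elasticsearch, it's worth confirming that the operator pod is up:

```shell
# The operator runs as a single-replica StatefulSet in the elastic-system namespace
kubectl get pods -n elastic-system
# Wait until elastic-operator-0 shows STATUS "Running" before continuing
```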
Deploy Elasticsearch Cluster
Use a Kubernetes manifest to define and deploy a single-node Elasticsearch cluster.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.15.3
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF
After deployment, verify the cluster's health status and view the running pods:
kubectl get elasticsearch
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
To view real-time logs from the Elasticsearch pod, use the following command:
kubectl logs -f quickstart-es-default-0
Check the quickstart-es-http service to retrieve its details, which will be used to access Elasticsearch:
kubectl get service quickstart-es-http
The following command retrieves the password for the default elastic user by decoding it from the Kubernetes secret:
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
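Under the hood, this simply base64-decodes the elastic key of the secret, since Kubernetes stores secret values base64-encoded. The decoding step can be reproduced on its own with base64 (the sample string below is an arbitrary example, not a real password):

```shell
# Kubernetes secret values are base64-encoded; decode with base64 -d
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# → password123
```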
Set up port forwarding to access the Elasticsearch cluster locally.
kubectl port-forward service/quickstart-es-http 9200
Use curl to connect and check that Elasticsearch is running:
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
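Beyond the version banner, the cluster health API gives a quick go/no-go signal (with port forwarding still active):

```shell
# Query cluster health; -k skips verification of the self-signed certificate
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health?pretty"
# A single-node cluster typically reports status "green" or "yellow"
```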
You should see output similar to the following in your terminal (the cluster UUID, build hash, and dates will differ):
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "wwna2OsPRxqh8W0wGqv7QA",
  "version" : {
    "number" : "8.15.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "f97532e680b555c3a05e73a74c28afb666923018",
    "build_date" : "2024-10-09T22:08:00.328917561Z",
    "build_snapshot" : false,
    "lucene_version" : "9.11.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
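To confirm the cluster can index and search data end to end, you can exercise it with a throwaway index (the index name `demo-logs` and the document fields are arbitrary examples):

```shell
# Index a sample document
curl -u "elastic:$PASSWORD" -k -X POST "https://localhost:9200/demo-logs/_doc" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello from minikube", "level": "info"}'

# Search for it (newly indexed documents become searchable within about a second)
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/demo-logs/_search?q=message:hello&pretty"

# Clean up the test index
curl -u "elastic:$PASSWORD" -k -X DELETE "https://localhost:9200/demo-logs"
```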
Deploy Kibana
This deploys Kibana, which will connect to the Elasticsearch instance you just created.
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.15.3
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
Set up port forwarding to access the Kibana dashboard locally:
kubectl port-forward service/quickstart-kb-http 5601
Open https://localhost:5601 in your browser to access Kibana's interface. Log in as the elastic user with the password retrieved earlier; since ECK uses a self-signed certificate by default, your browser may warn about the connection.
Automate Elasticsearch and Kibana Port Forwarding
To streamline access to Elasticsearch and Kibana, you can automate port forwarding using shell scripts. These scripts make it easy to establish connections without repeatedly typing commands.
Create a Port Forwarding Script for Elasticsearch
Create the Script File
Open a new file with vim (or your preferred editor) to store the port forwarding command for Elasticsearch:
vim elastic-service.sh
Add Port Forwarding Command
In the editor, add the following command, which will forward the Elasticsearch service port 9200 to your local machine:
kubectl port-forward service/quickstart-es-http 9200
Save and Make the Script Executable
Save the file, exit the editor, and then make the script executable:chmod +x elastic-service.sh
Now, you can simply run ./elastic-service.sh to initiate port forwarding to Elasticsearch on port 9200.
Create a Port Forwarding Script for Kibana
Create the Script File
Open a new file for the Kibana port forwarding script:
vim kibana-service.sh
Add Port Forwarding Command
Add the following line to forward Kibana’s port 5601:
kubectl port-forward service/quickstart-kb-http 5601
Save and Make the Script Executable
Save the file, close the editor, and make this script executable:
chmod +x kibana-service.sh
Run ./kibana-service.sh to quickly set up port forwarding to Kibana on port 5601.
These scripts simplify accessing the ELK Stack on your Minikube Kubernetes environment by letting you start port forwarding with a single command for each service.
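If you prefer a single entry point, a small wrapper script can start both forwards together and stop them cleanly on Ctrl+C. This is a sketch; the script name and the trap-based cleanup are my additions, not part of the original setup:

```shell
#!/usr/bin/env bash
# elk-forward.sh — forward Elasticsearch (9200) and Kibana (5601) together
set -euo pipefail

# Start both port forwards in the background
kubectl port-forward service/quickstart-es-http 9200 &
kubectl port-forward service/quickstart-kb-http 5601 &

# Kill both background forwards when this script exits (e.g. on Ctrl+C)
trap 'kill $(jobs -p)' EXIT

# Block until the forwards terminate
wait
```

Make it executable with `chmod +x elk-forward.sh` and run `./elk-forward.sh`; pressing Ctrl+C tears down both forwards at once.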
Setting up the ELK Stack on Minikube provides an excellent environment for local development and testing without the need for cloud resources. By following these steps, you now have a fully functional ELK Stack running on your local Kubernetes cluster, allowing you to dive into log management, monitoring, and analytics workflows directly on your machine.
In the next part of this series, we’ll cover how to build an ETL (Extract, Transform, Load) pipeline using Logstash to fetch and index data from MySQL into Elasticsearch. This will allow you to bring structured data from relational databases into the ELK Stack, unlocking powerful visualization and search capabilities in Kibana. Stay tuned for a deeper dive into integrating Logstash for seamless data transformation and analysis.
Extras
Troubleshooting
Optional: Add User to Docker Group:
If you don’t want to use sudo every time, you can add your user to the Docker group so it has the necessary permissions to access the Docker daemon:
sudo usermod -aG docker $USER
To apply the changes from the usermod command (which added your user to the Docker group), you need to log out of your system and log back in.
Alternatively, you can restart your terminal or run the following command to refresh your group membership without needing to log out:
newgrp docker
After logging back in or running the above command, check again to see if docker is listed in your groups:
groups
You should see docker listed among the other groups. For example:
shobhit adm cdrom sudo dip plugdev lpadmin lxd sambashare docker
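As a final check, run a container without sudo; success confirms the group change took effect:

```shell
# Pulls and runs the hello-world image, then removes the container
docker run --rm hello-world
```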
Written by Shobhit Sharma
A developer crafting code and sharing insightful perspectives