Logging and Monitoring in Kubernetes with the Metrics Server
🗼Introduction
Logging and monitoring are crucial aspects of managing Kubernetes clusters effectively. They help ensure that applications are running smoothly, performance issues are quickly identified, and resources are optimally utilized. In this blog, we'll explore logging and monitoring in Kubernetes, focusing on the Metrics Server.
🗼What is the Metrics Server?
The Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes' built-in autoscaling pipelines. It collects resource usage data from the Kubelet on each node and exposes it through the Kubernetes API server via the Metrics API. These metrics can then be used by various Kubernetes components, such as the Horizontal Pod Autoscaler (HPA) and the Kubernetes dashboard.
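Once the Metrics Server is running, you can also query the metrics.k8s.io API it registers directly through the API server, which is a handy way to see exactly what the autoscalers see (the jq pipe is optional and only used here for readability):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods" | jq .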
🗼Why Use the Metrics Server?
Autoscaling: Provides the resource metrics the Horizontal Pod Autoscaler needs to automatically scale the number of pod replicas based on current load (see the example manifest after this list).
Resource Monitoring: Offers insights into CPU and memory usage of nodes and pods, helping in resource planning and optimization.
Integration with Kubernetes Dashboard: Enhances the dashboard by providing real-time metrics, making it easier to monitor the health and performance of the cluster.
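As a quick sketch of the autoscaling use case, here is an illustrative HorizontalPodAutoscaler manifest. It assumes a Deployment named nginx-deployment (like the one whose pods appear later in this post) with CPU requests set on its containers; the name, replica bounds, and 70% target are examples, not recommendations:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                 # illustrative name
spec:
  scaleTargetRef:                 # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment        # assumes this Deployment exists and sets CPU requests
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU utilization exceeds ~70% of requests
The same autoscaler can also be created imperatively with kubectl autoscale deployment nginx-deployment --cpu-percent=70 --min=2 --max=10.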
🗼Setting Up the Metrics Server
Prerequisites
A running Kubernetes cluster (v1.8 or higher).
The kubectl command-line tool, configured to communicate with your cluster.
Installation
You can deploy the Metrics Server using the YAML manifest provided by the Kubernetes community. Run the following command to deploy it:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify the installation:
kubectl get deployment metrics-server -n kube-system
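You can also confirm that the Metrics API has been registered and is answering:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
On local or test clusters (for example minikube or kind), the Metrics Server sometimes cannot verify the kubelets' self-signed certificates and its pod never becomes ready. A common workaround for such non-production setups, shown here only as a sketch for that situation, is to add the --kubelet-insecure-tls argument:
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'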
🗼Using the Metrics Server for Monitoring
Once installed, the Metrics Server starts collecting and exposing metrics. You can use the kubectl top commands to view them.
Viewing Node Metrics
To see CPU and memory usage of nodes:
kubectl top nodes
Example output:
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-1   250m         12%    1024Mi          45%
node-2   100m         5%     512Mi           23%
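On newer kubectl versions you can also sort the output to surface the busiest nodes first:
kubectl top nodes --sort-by=cpu
kubectl top nodes --sort-by=memory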
Viewing Pod Metrics
To see CPU and memory usage of pods:
kubectl top pods
Example output:
NAME                                CPU(cores)   MEMORY(bytes)
nginx-deployment-6d8c9bdf67-7bb9z   1m           2Mi
nginx-deployment-6d8c9bdf67-8fgh2   1m           2Mi
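A few useful variations:
kubectl top pods -n kube-system       # pods in a specific namespace
kubectl top pods --all-namespaces     # pods across the whole cluster
kubectl top pods --containers         # per-container breakdown within each pod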
🗼Logging in Kubernetes
Logging is another critical aspect of cluster management. Kubernetes does not provide a built-in, cluster-wide logging solution, but it supports various logging architectures: reading container stdout/stderr with kubectl logs, node-level logging agents deployed as DaemonSets (for example Fluentd shipping logs to Elasticsearch, with Kibana for visualization, the EFK stack), and sidecar containers that expose application log files.
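As a minimal sketch of one such architecture, the logging sidecar, the Pod below runs an application container that writes to a log file on a shared emptyDir volume and a sidecar container that streams that file to its own stdout, where kubectl logs can read it. The names and images are illustrative, modeled on the pattern in the Kubernetes documentation, and not tied to any particular deployment:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar     # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    # Stand-in for an application that writes logs to a file instead of stdout
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # Streams the log file to stdout so it becomes visible to kubectl logs
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
You would then read the application's output with kubectl logs app-with-logging-sidecar -c log-sidecar.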
Basic Logging with kubectl logs
You can fetch logs from a specific pod using:
kubectl logs <pod_name>
For example:
kubectl logs nginx-deployment-6d8c9bdf67-7bb9z
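A few commonly useful options (the pod name is the one from the example above; <container_name> is a placeholder):
kubectl logs -f nginx-deployment-6d8c9bdf67-7bb9z                     # stream new log lines as they arrive
kubectl logs --tail=100 nginx-deployment-6d8c9bdf67-7bb9z             # only the last 100 lines
kubectl logs --previous nginx-deployment-6d8c9bdf67-7bb9z             # logs from the previous, crashed container instance
kubectl logs nginx-deployment-6d8c9bdf67-7bb9z -c <container_name>    # a specific container in a multi-container pod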
🗼Conclusion
Monitoring and logging are essential for maintaining the health and performance of your Kubernetes clusters. The Metrics Server provides a lightweight and efficient way to collect resource metrics, which can be used for autoscaling and monitoring. For logging, the EFK stack offers a robust solution for centralized log management. Implementing these tools will help you gain better insight into your cluster's behavior, ensuring smooth and efficient operations.