Kubernetes Learning Week Series 6
Kubernetes API Practical Guide
https://blog.kubesimplify.com/practical-guide-to-kubernetes-api
This article serves as a practical guide to understanding and using the Kubernetes API. It covers the fundamentals of the Kubernetes API, including its RESTful nature, methods for accessing the API, and the internal structure involving resources, API groups, and versions. Additionally, it provides a hands-on demonstration of how to use curl to list all running pods in a Kubernetes cluster, along with tips for further exploration.
Key Points
Kubernetes is an API-driven platform: Every operation involves API interactions.
Understanding the Kubernetes API allows for deeper control of the cluster, enabling automation, customization, and integration with other tools.
Kubernetes API is RESTful, adhering to principles like stateless communication, a uniform interface, and self-descriptive messages.
The kube-apiserver component exposes the Kubernetes API to users and components within the cluster.
There are multiple ways to access the Kubernetes API, including:
Via kubectl
Using REST calls with curl
Leveraging client libraries for various programming languages
The API structure revolves around resources (entities) and operations (actions), where resources are endpoints that can be independently manipulated.
Kubernetes API groups organize resource types for simplicity and extend the API’s capabilities. The main categories include the core group and named groups.
Each API group has independent versioning, progressing through phases like alpha, beta, and stable.
The "kind" field in Kubernetes manifests specifies the resource schema, which is crucial for serialization and deserialization in client-server communication.
A step-by-step demonstration shows how to use curl to list all running pods in a cluster, requiring authentication with the CA certificate, client certificate, and client key (a client-library equivalent is sketched after this list).
Tips for further exploration:
Listing all resources and API versions
Using kubectl’s raw mode
Viewing the API calls made by kubectl
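The article's own demo drives the API with curl plus the CA certificate, client certificate, and client key. As a hedged companion, here is a minimal sketch using the official Python client (assuming `pip install kubernetes` and a kubeconfig with permission to list pods) that fetches the same data:

```python
# Minimal sketch: list all running pods via the official Kubernetes Python client.
# Assumes a kubeconfig at the default location with permission to list pods.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
v1 = client.CoreV1Api()

# Equivalent to GET /api/v1/pods on the kube-apiserver.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase == "Running":
        print(f"{pod.metadata.namespace}/{pod.metadata.name}")
```

The same listing is available through kubectl's raw mode mentioned above, for example kubectl get --raw /api/v1/pods.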
ETCD - Disaster Recovery Solution
https://medium.com/@alruwayti/k8s-etcd-dr-solution-bc8d9e43dadf
This article discusses disaster recovery (DR) solutions for Kubernetes (K8s) clusters, focusing on the ETCD component. It explains the challenges of managing ETCD in a stretched cluster across two regions and provides a solution to maintain high availability and consistency.
Key Points
Introduces the challenges of running a Kubernetes cluster across two availability zones for disaster recovery, with a focus on ETCD.
Explains the complexity of managing stretched Kubernetes clusters using tools like ArgoCD and Rancher.
Discusses quorum issues with ETCD when dividing master nodes across two regions.
Explains the concept of quorum and its importance in maintaining cluster availability within ETCD (the arithmetic is sketched after this list).
Describes a proposed solution involving automated scripts to handle ETCD downtime and recovery operations.
Provides detailed steps on managing the ETCD cluster during a zone outage and reintegrating it when the zone comes back online.
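The article's recovery scripts are not reproduced here; purely as an illustration of the quorum problem it describes, the following Python sketch computes the quorum size for a given member count and shows why an uneven split across two zones cannot tolerate losing the larger side:

```python
# Illustrative sketch of etcd (Raft) quorum math -- not the article's actual scripts.
def quorum(members: int) -> int:
    """etcd needs a majority of voting members to accept writes."""
    return members // 2 + 1

for size in (3, 5):
    q = quorum(size)
    print(f"{size}-member cluster: quorum = {q}, tolerates {size - q} failures")

# With 3 masters split 2+1 across two zones, losing the 2-member zone leaves
# 1 < quorum(3) = 2 voters, so writes stall until the lost members are removed
# and re-added (e.g. with `etcdctl member remove` / `member add`), which is
# what the article's automated recovery scripts take care of.
```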
Kubernetes Gateway API - Lessons Learned from Practice
https://danoncoding.com/kubernetes-gateway-api-lessons-from-the-trenches-a1f6875d5d84
The Kubernetes Gateway API is an alternative to the classic Ingress. It provides a standardized routing specification (one also being adopted by service meshes) and separates the management of external access from the routing definitions of individual workloads, which reduces maintenance bottlenecks and cross-team coordination. It supports multiple protocols and, despite some initial challenges, has delivered positive results after a year of use.
Key Points
The Kubernetes Gateway API separates external access management from routing definitions, reducing friction and cross-team coordination.
It supports multiple protocols such as TCP, UDP, and gRPC, unlike Ingress, which only supports HTTP(S).
The Gateway resource specifies how traffic enters the cluster, while the HTTPRoute resource defines routing to workloads (see the sketch after this list).
Decoupling entry points and routing definitions is beneficial for dynamic workloads.
The author’s experience with the API has been positive, with its many advantages outweighing the drawbacks, which are being addressed rapidly.
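To make the Gateway/HTTPRoute split concrete, here is a hedged sketch (the names, namespaces, and gateway class are made up) in which a platform team owns a single shared Gateway and an application team owns its own HTTPRoute, created via the Python client's custom-objects API (assuming the gateway.networking.k8s.io/v1 CRDs are installed):

```python
# Hedged sketch: one platform-owned Gateway, one app-owned HTTPRoute.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
GROUP, VERSION = "gateway.networking.k8s.io", "v1"

gateway = {  # entry point: how traffic gets into the cluster
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "Gateway",
    "metadata": {"name": "shared-gateway", "namespace": "infra"},
    "spec": {
        "gatewayClassName": "example-gateway-class",  # hypothetical class name
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

route = {  # routing: which workload receives the traffic
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "HTTPRoute",
    "metadata": {"name": "demo-route", "namespace": "team-a"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway", "namespace": "infra"}],
        "rules": [{"backendRefs": [{"name": "demo-service", "port": 8080}]}],
    },
}

api.create_namespaced_custom_object(GROUP, VERSION, "infra", "gateways", gateway)
api.create_namespaced_custom_object(GROUP, VERSION, "team-a", "httproutes", route)
```

Because the Gateway and the routes live in different namespaces and can be owned by different teams, the platform team can change listeners or certificates without touching application routes.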
Kubernetes Events - The Information Source of the Cluster
https://decisivedevops.com/kubernetes-events-news-feed-of-your-kubernetes-cluster-826e08892d7a/
This article focuses on Kubernetes events and explores how to effectively use kubectl events to monitor and troubleshoot cluster issues. Events are generated by various Kubernetes components, such as the scheduler, kubelet, and controllers, to capture information about pods, nodes, and other resources. Whether a pod is scheduled, a container crashes, or a node runs out of disk space, events record these state changes. Think of Kubernetes events as a chronicle of cluster activity, offering a centralized view of all significant actions affecting cluster resources.
Key Points
Kubernetes events are critical for monitoring and troubleshooting cluster issues, providing a centralized view of activities related to pods, nodes, and other resources.
The kubectl get events and kubectl events commands both retrieve events, but they differ in default ordering: kubectl events sorts by event time, whereas kubectl get events does not.
Kubernetes v1.19 introduced the new events.k8s.io/v1 API version, offering more structured and expressive event information compared to the older v1 API.
By default, events are retained in etcd for one hour. This duration can be extended with the kube-apiserver --event-ttl flag or by exporting events to an external store.
Kubernetes aggregates repeated events, tracking metadata such as the first and last occurrence timestamps and the total count, instead of storing each occurrence individually.
Events have a time-to-live (TTL), which resets with each new occurrence. For frequently repeated events, this allows the same event to persist in the system longer.
Kubernetes event objects have a specific structure and fields, similar to other Kubernetes objects, and can be output in JSON format for detailed inspection.
Practical use cases of kubectl events (a client-library sketch of the same queries follows this list) include:
Listing recent events in a namespace.
Monitoring events for specific resources.
Filtering events by type.
Tracking security-related activities.
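As a companion to the kubectl commands above, here is a minimal sketch that reads events through the newer events.k8s.io/v1 API via the Python client and keeps only Warning events (the namespace name is an assumption):

```python
# Minimal sketch: list Warning events in a namespace via events.k8s.io/v1.
from kubernetes import client, config

config.load_kube_config()
events_api = client.EventsV1Api()

for ev in events_api.list_namespaced_event("default").items:
    if ev.type == "Warning":  # e.g. FailedScheduling, BackOff, Unhealthy
        obj = ev.regarding  # the resource the event is about
        print(f"{ev.reason} {obj.kind}/{obj.name}: {ev.note}")
```

The kubectl equivalent of that filter is a field selector, for example kubectl get events --field-selector type=Warning.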
Creating Policies to Whitelist Image Registries in a K8s Cluster
https://medium.com/@alparslanuysal/whitelisting-image-registries-44150c86c4ac
Image registries are essential for storing and distributing container images, which are critical to the cluster environment. Ensuring these images come from trusted sources is vital for security. This article discusses the importance of using reliable public and private image registries (such as Docker Hub, Red Hat Catalog, and gcr.io) and emphasizes the need for strict organizational policies to control image sources. It also explains how to implement OPA Gatekeeper in Kubernetes to enforce these policies and ensure only trusted images are used.
Key Points
Image registries store and distribute container images, and their security is crucial for the cluster environment.
Base images should be downloaded from reliable sources to prevent malware attacks and enable quick vulnerability fixes.
Caution should be exercised when using Docker Hub, as it is a public registry where anyone can upload images.
Organizational policies should restrict container image usage to specific secure registries.
OPA Gatekeeper can be deployed in Kubernetes to enforce policies, ensuring images are downloaded only from trusted sources.
Example policies and test scenarios demonstrate how to whitelist specific registries (e.g., Docker Hub); the check being enforced is sketched below.
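The article's actual enforcement uses OPA Gatekeeper ConstraintTemplates written in Rego; purely as an illustration of the whitelist logic, here is a hedged Python sketch that checks whether every container image in a pod spec comes from an allowed registry (the registry list is an example, not the article's exact policy):

```python
# Illustrative sketch of the registry-whitelist check that a Gatekeeper
# constraint enforces (the real policy is written in Rego, not Python).
ALLOWED_REGISTRIES = ("docker.io/", "registry.access.redhat.com/", "gcr.io/")

def violations(pod_spec: dict) -> list:
    """Return a message for every container image outside the whitelist."""
    problems = []
    containers = pod_spec.get("containers", []) + pod_spec.get("initContainers", [])
    for container in containers:
        image = container["image"]
        if not image.startswith(ALLOWED_REGISTRIES):
            problems.append(f"image {image!r} is not from an allowed registry")
    return problems

# Example: an image pulled from an unapproved registry is flagged.
spec = {"containers": [{"name": "app", "image": "ghcr.io/example/app:1.0"}]}
print(violations(spec))
```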
How to Significantly Reduce Prometheus Load and Cardinality by Using Only the Istio Labels You Need
This article provides guidance on reducing Prometheus load and cardinality by customizing Istio labels to include only those necessary for observability, emphasizing the importance of managing cardinality to prevent system overload.
Key Points
Managing cardinality in Prometheus is crucial for system performance and cost efficiency.
Istio's metric labels can be customized using the tags_to_remove option in the IstioOperator manifest.
Comprehensive testing in non-production environments is recommended to avoid unintended impacts on cluster behavior.
The article includes an example configuration for removing specific labels from metrics, reducing storage costs and improving Prometheus performance (a back-of-the-envelope illustration of the effect follows this list).
Observations demonstrate significant benefits, such as lower S3 storage costs and improved Prometheus efficiency.
The Istio Operator manages Istio resources on Kubernetes clusters, and updating its configuration may affect cluster behavior.
The article highlights the importance of validating changes and notes that Istio sidecar containers may need to be restarted for changes to take effect.
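The configuration itself lives in the IstioOperator manifest; as a rough illustration of why dropping labels helps, here is a hedged Python sketch (the label cardinalities are made-up numbers) showing how removing one high-cardinality label shrinks the worst-case number of time series per metric:

```python
# Rough illustration (made-up numbers): the worst-case series count per metric
# is the product of its label cardinalities, so dropping a label divides it.
from math import prod

label_cardinality = {
    "destination_service": 200,
    "response_code": 10,
    "request_protocol": 3,
    "destination_canonical_revision": 50,  # candidate for tags_to_remove
}

before = prod(label_cardinality.values())
after = prod(v for k, v in label_cardinality.items()
             if k != "destination_canonical_revision")
print(f"series before: {before}, after: {after}, reduction: {before / after:.0f}x")
```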