Kubernetes For Noobs ☸️

Aditya Dike
7 min read

Hey folks, since you've clicked on this post, I assume that you are a beginner and you want to understand what Kubernetes is and why it is so popular. We are going to explore a lot of things about Kubernetes, so just read the whole blog post, and you will get something useful out of it.

Note: Before diving into Kubernetes, you should have a general understanding of containers: why they exist and how they are used to bundle applications.

Why Kubernetes (K8s)?

A container makes deploying an application easier by packaging it for distribution. However, simply packaging an application into a container and shipping it to users is not enough. Containers can fail at runtime or run out of resources, resulting in downtime where the application becomes inaccessible to users. As an engineer, you should take steps to minimize the risk of downtime.

To avoid these issues we can take several measures, but one of the most popular is to run multiple instances of the application. By scaling out the application instances, we can seamlessly shift the load to another container whenever one dies. That means we need to create replicas of existing containers, and we also need to replace containers that have failed. Doing all of these tasks manually is not efficient. This is where Kubernetes comes into play.

Kubernetes is a tool that simplifies and accelerates all of these processes. If you want to scale your application, Kubernetes makes it easy. If you need to manage and distribute the load, you can utilize the load balancer provided by Kubernetes, and the list goes on.

Kubernetes is a platform for managing containerized applications. It does this by providing APIs that allow you to control and manage how your containerized applications are deployed, scaled, and organized. Kubernetes can be used on-premises or in the cloud. Initially designed as a container orchestration engine, Kubernetes has evolved to become much more than that.

Kubernetes is sometimes shortened to K8s with the 8 standing for the number of letters between the “K” and the “s”.

Let's look at some commonly used Kubernetes components; this will deepen your understanding of how everything fits together.

Node & Pod 🎯

In Kubernetes, a node can be either a virtual machine or a physical server, depending on how the cluster is set up. A node runs several components of its own, and we will look at each of those, but first, let's look at a Pod.

A Pod is the smallest deployable unit in Kubernetes. It holds your containers: usually one container per pod, though a pod can include multiple containers if needed. A pod is an abstraction over the container, so you work with Kubernetes instead of with containers directly. Each pod is assigned its own IP address so that pods can communicate with one another.

Pods are ephemeral, which means they can die or go down at any time.
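To make this concrete, here is a minimal sketch of a Pod manifest; the name my-app and the nginx image are illustrative assumptions, not taken from any real project.

```yaml
# Hypothetical example: a Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # illustrative name
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```

You could apply this with kubectl apply -f pod.yaml, although in practice pods are rarely created directly; they are usually managed by a Deployment, which we cover below.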

Services & Ingress 🎯

Since pods are ephemeral and can go down, when a pod dies, Kubernetes automatically creates a new pod to replace it. When a new pod is created, it receives a new IP address. This means that if your application was communicating with the old pod's IP address, communication will break after the pod is restarted.

To solve this issue, Kubernetes has a concept of Services. A Service provides a permanent IP address and a name that can route traffic to your pods. Even if the pods die and new ones are created, the Service IP address and name remain the same. Services act as an abstraction layer over one or more pods, allowing your application to communicate with the Service instead of communicating directly with the pods.

There are two kinds of services in Kubernetes:

Internal service: Internal services are only accessible from within the Kubernetes cluster. They are used to expose applications to other applications running in the cluster.

External service: External services are accessible from outside the cluster. They expose applications to end users or other external systems.
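To make the difference concrete, here is a sketch of a Service for the hypothetical my-app pod from earlier. By default a Service is internal (type ClusterIP); changing the type to NodePort or LoadBalancer is one way to expose it outside the cluster.

```yaml
# Hypothetical internal service: routes cluster-internal traffic to pods labeled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP          # the default; use NodePort or LoadBalancer for external access
  selector:
    app: my-app            # matches the pod labels
  ports:
    - port: 80             # port the service listens on
      targetPort: 80       # port on the container
```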

For example, you may have a database service that stores data for the applications in the cluster; you make this an internal service to restrict access to applications within the cluster. A web application service that you want to expose to end users, on the other hand, would be made an external service. To expose a service outside the cluster over HTTP, you use an Ingress resource in Kubernetes. An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An Ingress controller running in the cluster (such as NGINX or Traefik) watches for Ingress resources and configures URL rules to match and forward traffic to the backend services.

So in summary, Kubernetes Ingress resources allow you to expose services running inside your cluster to external HTTP and HTTPS traffic.
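As an illustration, here is a sketch of an Ingress that forwards traffic for a made-up hostname, my-app.example.com, to the service above; it assumes an NGINX ingress controller is installed in the cluster.

```yaml
# Hypothetical Ingress: routes external HTTP traffic to my-app-service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller
  rules:
    - host: my-app.example.com     # made-up hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```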

ConfigMaps & Secrets 🎯

ConfigMaps and Secrets are two Kubernetes resources used to store configuration data and secrets respectively. They allow you to separate and manage configuration data outside of your application code and containers.

For example, suppose your application service is running on a node along with a database service, and the application talks to the database using a database URL. Now you want to change the name of the database service. Since the database URL is baked into the built application image, you would need to rebuild the image with the new name.

To avoid this kind of configuration issue, you can use ConfigMaps. A ConfigMap lets you decouple configuration artifacts from the image build process. You can store the database URL and other configuration details in a ConfigMap and mount it as a volume in the application pod, or expose it to the pod as environment variables.

For storing sensitive data like passwords, you should use Secrets instead of ConfigMaps. Secrets allow you to securely store and control access to sensitive information like passwords, OAuth tokens etc.
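Here is a rough sketch of what that looks like; the names my-app-config and my-app-secret, and the MongoDB URL, are made up for illustration.

```yaml
# Hypothetical ConfigMap: non-sensitive configuration such as the database URL.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  DATABASE_URL: "mongodb://mongo-service:27017"
---
# Hypothetical Secret: sensitive values, stored base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=        # "password" base64-encoded, for illustration only
```

In the pod spec you can then reference these values as environment variables (via configMapKeyRef or secretKeyRef) or mount them as files, so changing a value means updating the ConfigMap or Secret rather than rebuilding the image.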

Volumes 🎯

By default, a pod's filesystem is as ephemeral as the pod itself, but Kubernetes does provide mechanisms for managing data persistence.

Kubernetes achieves data persistence through the use of volumes. Volumes provide a way to attach storage to your pods, allowing you to persist data across pod restarts and rescheduling. This allows your applications, like databases, to have a permanent place to store data while taking advantage of Kubernetes' pod management.

When your pod is scheduled, the volume is attached and the data is available inside the pod. If the pod is rescheduled, the same volume is attached to the new pod instance with all the data intact.

There are two main types of volumes in Kubernetes:

  • Ephemeral volumes - These volumes have the same lifetime as the pod. They are created when the pod is created and destroyed when the pod is deleted.

  • Persistent volumes - These volumes persist beyond the lifetime of a pod. They are managed independently from the pods that use them.
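As a rough sketch, here is how persistent storage is typically requested and used; the claim name db-data and the mongo image are illustrative, and the example assumes the cluster has a default StorageClass that can provision storage.

```yaml
# Hypothetical claim for 1Gi of persistent storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Hypothetical pod mounting that claim so database files survive pod restarts.
apiVersion: v1
kind: Pod
metadata:
  name: mongo
spec:
  containers:
    - name: mongo
      image: mongo:7
      volumeMounts:
        - name: data
          mountPath: /data/db      # where MongoDB stores its data files
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```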

Deployments & Statefulsets 🎯

A Deployment in Kubernetes ensures that your application is highly available and can self-heal by automatically replacing pods that fail. A Deployment controls ReplicaSets, which in turn manage Pods.

For example, if you have an application with two services - a web server and a database - a Deployment allows you to specify the number of replicas of the web server pods you want to run. If one of the web server pods fails or crashes, the Deployment will automatically create a new pod to replace it and maintain the specified number of replicas.

This ensures that your application stays available even if some of the pods die. The Deployment acts as a blueprint for creating and updating pods, allowing you to roll out new versions of your application in a controlled fashion. When you update a Deployment, it gradually replaces the existing pods with new ones running the new configuration (a rolling update). This allows you to deploy new versions of your application without downtime.

The database service, on the other hand, is typically deployed separately using a StatefulSet. Since the database needs a stable network identity and storage, a StatefulSet is better suited for managing database replicas.

In summary, a Deployment allows you to declare the number of pods you want to run for a particular application and ensures that the correct number of pods is always up and running, even if some pods fail or are taken down for maintenance. This provides high availability and automatic self-healing capabilities to your Kubernetes applications.
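To tie this together, here is a sketch of a Deployment for the web server from the example above; the name web, the replica count, and the nginx image are all illustrative assumptions.

```yaml
# Hypothetical Deployment: keeps two replicas of the web server running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                      # Kubernetes keeps this many pods running
  selector:
    matchLabels:
      app: web
  template:                        # pod template used to create the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If one of these pods crashes, the ReplicaSet created by the Deployment notices that only one replica is running and starts a new pod to get back to two.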

That was a brief introduction to Kubernetes, where we discussed why Kubernetes exists and explored its fundamentals. I hope you've gained some knowledge from reading this blog. I am planning to create a series on Kubernetes, as it is a crucial topic. In the next blog post, I will be explaining the architecture of Kubernetes. Please stay tuned for that and make sure to subscribe for further updates.

If you're still reading this, a like and a comment would be greatly appreciated. You can also follow me on Hashnode and subscribe to my blog.


Written by

Aditya Dike

Aditya is passionate about DevOps and Java development, especially web application development and the integration of DevOps tools. He likes to contribute to open source and engage with the cloud-native ecosystem.