Day 34: Working with Services in Kubernetes

What are Services in K8s?

In Kubernetes, a Service is an object that gives a set of Pods a stable network identity and abstracts away individual Pod IP addresses. Services allow Pods to receive traffic from other Pods, other Services, and external clients.

Here's a breakdown of what Services do in Kubernetes:

  1. Stable Network Identities: A Service assigns a stable virtual IP address (the ClusterIP) and DNS name to a group of Pods. This identity remains consistent even as the underlying Pods are scaled up, rescheduled onto different nodes, or restarted.

  2. Traffic Distribution: Services facilitate communication by allowing other Pods, Services, or external clients to send traffic to the Pods they manage. They act as load balancers, distributing incoming traffic across the available Pods behind the Service.

  3. Abstraction of Pod IPs: Services abstract away the individual IP addresses of Pods. Instead of directly addressing individual Pods, clients can interact with the Service IP address, which dynamically routes traffic to the appropriate Pods based on the configured Service type and selector.

  4. Connectivity within the Cluster: Services enable seamless communication between different components within the Kubernetes cluster. This includes communication between Pods within the same application or across different applications, regardless of their physical location in the cluster.

Overall, Services play a crucial role in Kubernetes networking, providing a unified and reliable way for Pods to communicate with each other and with external entities. They abstract away the complexities of network configuration, making it easier to manage and scale applications in Kubernetes environments.
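
For example, once a Service named todo-app-service exists (the name used in Task-1 below), any Pod in the cluster can reach the application through that stable name instead of a Pod IP. A minimal sketch, assuming the cluster's default DNS add-on (CoreDNS or kube-dns) is running:

# From a Pod in the same Namespace:
curl http://todo-app-service:80

# From a Pod in any other Namespace, use the fully qualified name:
curl http://todo-app-service.<namespace-name>.svc.cluster.local:80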

Introduction: Kubernetes Services simplify communication between components in a cluster by providing a stable endpoint for applications. They streamline networking complexities and ensure seamless connectivity.

Task-1: Creating a Service for todo-app Deployment

Step 1: Service Definition

Create a Service definition for your todo-app Deployment. Open a new file named service.yml and add the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: todo-app-service
spec:
  selector:
    app: todo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: NodePort
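
With type: NodePort, Kubernetes also opens a port on every node (assigned from the 30000-32767 range by default) that forwards to the Service. If you want a predictable port instead of a randomly assigned one, you can pin it yourself; the value 30080 below is only an illustrative choice:

  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
      nodePort: 30080   # optional; must fall within the cluster's NodePort range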

Step 2: Applying the Service Definition

Apply the Service definition to your Kubernetes cluster using the command:

kubectl apply -f service.yml -n <namespace-name>
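
To confirm that the Service's selector actually matched your todo-app Pods, you can also list its Endpoints; if no addresses show up, the selector and the Pod labels probably don't match:

kubectl get endpoints todo-app-service -n <namespace-name>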

Step 3: Verifying the Service

Verify that the Service is working by checking it in your Namespace and then accessing the todo-app through a node's IP and the allocated NodePort:

kubectl get svc -n <namespace-name>
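
The PORT(S) column shows the node port mapping (for example 80:3xxxx/TCP). A minimal way to hit the app, assuming your nodes' IPs are reachable from your machine (on minikube you can use minikube ip instead of looking up a node address):

# Find a node IP, then curl the node port reported by `kubectl get svc`
kubectl get nodes -o wide
curl http://<node-ip>:<node-port>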

Task-2: Creating a ClusterIP Service for Internal Access

Understanding ClusterIP Service: ClusterIP is the default Service type. It exposes the Service on a cluster-internal IP, which makes it ideal for communication between Pods inside the cluster but leaves it unreachable from outside.

Step 1: ClusterIP Service Definition

Create a ClusterIP Service definition for your todo-app Deployment. In a new file named cluster-ip-service.yml, add the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: todo-app-cluster-ip-service
spec:
  selector:
    app: todo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: ClusterIP

Step 2: Applying the ClusterIP Service Definition

Apply the ClusterIP Service definition to your Kubernetes cluster using the command:

kubectl apply -f cluster-ip-service.yml -n <namespace-name>

Step 3: Verifying the ClusterIP Service

Verify that the ClusterIP Service is working by listing it in your Namespace and then accessing the todo-app from another Pod in the cluster:

kubectl get svc -n <namespace-name>
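
One quick way to test in-cluster access is to start a temporary Pod in the same Namespace and call the Service by name. This is only a sketch and assumes the busybox image is available to your cluster:

kubectl run tmp-client --rm -it --image=busybox -n <namespace-name> -- wget -qO- http://todo-app-cluster-ip-service:80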

Task-3: Creating a LoadBalancer Service for External Access

Understanding LoadBalancer Service: LoadBalancer exposes the Service externally using a cloud provider's load balancer, ideal for scenarios requiring external access.

Step 1: LoadBalancer Service Definition

Create a LoadBalancer Service definition for your todo-app Deployment. In a new file named load-balancer-service.yml, add the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: todo-app-load-balancer-service
spec:
  selector:
    app: todo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer

Step 2: Applying the LoadBalancer Service Definition

Apply the LoadBalancer Service definition to your Kubernetes cluster using the command:

kubectl apply -f load-balancer-service.yml -n <namespace-name>

Step 3: Verifying the LoadBalancer Service

Verify that the LoadBalancer Service is working by checking the Service in your Namespace for an external IP and then accessing the todo-app from outside the cluster:

kubectl get svc -n <namespace-name>
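
On a cloud provider, the EXTERNAL-IP column will eventually change from <pending> to the address of the provisioned load balancer; on a local cluster without a cloud controller it stays <pending> (minikube users can run minikube tunnel to get an address). Once an address appears, the app can be reached directly, for example:

kubectl get svc todo-app-load-balancer-service -n <namespace-name>
curl http://<external-ip>:80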

Conclusion:

Kubernetes Services are vital for orchestrating connectivity and accessibility within clusters. Mastering the creation and configuration of Services enhances your ability to design resilient and scalable applications. Stay tuned for more Kubernetes adventures as we continue our #90daysofDevOps journey! 🚀👩‍💻 #DevOps #Kubernetes #Services #Deployment #AccessManagement
