Kubernetes Day 11 of 40daysofKubernetes: Exploring Multi-Container Pods

In Kubernetes, a pod is the smallest deployable unit that can host one or more containers. Sometimes, a single container isn’t enough to fulfill an application’s needs. You might need multiple containers that work together as part of a single pod. This is where multi-container pods come into play.

Multi-container pods allow you to group containers that need to share resources or communicate closely with each other. For instance, you might have one container serving your application and another acting as a sidecar that handles logging or data synchronization.

This blog will walk you through understanding multi-container pods, the reasons to use them, and how to implement one in Kubernetes.

Why Multi-Container Pods?

Multi-container pods are useful in situations where containers need to:

  1. Share storage volumes: Both containers might need to access the same set of files.

  2. Share networking: Containers in the same pod can communicate over localhost (127.0.0.1), as they share the same network namespace (see the sketch after this list).

  3. Coordinate behavior: You can run a secondary (sidecar) container that acts as a logging or proxy helper alongside your main application container.
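
For example, here is a minimal sketch of point 2: two containers in one pod talking over localhost. The pod and container names are illustrative, and the short sleep simply gives nginx a moment to start before the check runs.

apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: web
      image: nginx:latest   # listens on port 80 inside the pod
    - name: checker
      image: busybox
      # Both containers share the pod's network namespace, so 127.0.0.1
      # here is the same loopback interface nginx is listening on.
      command: ['sh', '-c', 'sleep 5; wget -qO- http://127.0.0.1:80; sleep 3600']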

When to Use Multi-Container Pods?

  • Sidecar pattern: A secondary container complements the main application. E.g., logging, monitoring, or proxy containers.

  • Ambassador pattern: One container acts as a proxy, while another provides the core application logic.

  • Adapter pattern: A container translates or adapts the output of the main application.
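
As a rough sketch of the adapter pattern (the pod, container, and path names here are illustrative), the adapter below tails a plain-text log written by the main container and re-emits each line with a timestamp on its own stdout, where a log collector would pick it up:

apiVersion: v1
kind: Pod
metadata:
  name: adapter-demo
spec:
  containers:
    - name: app
      image: busybox
      # Stand-in "application": appends a raw line to a shared log every 5 seconds.
      command: ['sh', '-c', 'while true; do echo "app event" >> /var/log/app/app.log; sleep 5; done']
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: adapter
      image: busybox
      # Adapter: tails the raw log and prefixes each line with a timestamp.
      command: ['sh', '-c', 'touch /var/log/app/app.log; tail -f /var/log/app/app.log | while read line; do echo "$(date) $line"; done']
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  volumes:
    - name: app-logs
      emptyDir: {}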

Step-by-Step Guide: Creating a Multi-Container Pod in Kubernetes

Let's create a multi-container pod that has:

  1. A main application container running nginx to serve web traffic.

  2. A sidecar container that writes logs into a shared volume.

Step 1: Setting Up Your Kubernetes Environment

Ensure you have a Kubernetes cluster up and running. You can use Minikube, Kind, or any cloud-based Kubernetes provider like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS).

For this example, we assume you have kubectl configured and can access your cluster.
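
To confirm the environment is ready, run a couple of quick checks (the exact output depends on your cluster):

kubectl cluster-info
kubectl get nodes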

Step 2: Define a YAML File for Your Multi-Container Pod

Create a file called multi-container-pod.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
  labels:
    app: multi-container-pod
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-logs
          mountPath: /usr/share/nginx/html
    - name: sidecar-container
      image: busybox
      command: ['sh', '-c', 'echo "Logging output from sidecar container" > /shared/log.txt && sleep 3600']
      volumeMounts:
        - name: shared-logs
          mountPath: /shared
  volumes:
    - name: shared-logs
      emptyDir: {}

Explanation:

  • nginx-container: The main container, running an Nginx web server. Its web root (/usr/share/nginx/html) is the shared volume, so any file the sidecar writes there is served over HTTP.

  • sidecar-container: A sidecar container based on the BusyBox image. It runs a simple command that writes a log line to /shared/log.txt on the shared volume, then sleeps so the container stays running.

  • volumeMounts: Both containers mount the shared-logs volume, with Nginx serving content from it and the sidecar writing to it.

  • labels: The app: multi-container-pod label on the pod is what the Service in Step 6 uses to select it.

Step 3: Deploy the Pod to Your Kubernetes Cluster

Run the following command to apply the YAML file and create the multi-container pod:

kubectl apply -f multi-container-pod.yaml

You should see output similar to:

pod/multi-container-pod created

Step 4: Verify Pod Status

Ensure the pod is running correctly by checking its status:

kubectl get pods

You should see the multi-container-pod in the Running state:

NAME                  READY   STATUS    RESTARTS   AGE
multi-container-pod   2/2     Running   0          1m

Notice the 2/2 under the READY column, which indicates that both containers inside the pod are up and ready.
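
If the pod is stuck in Pending or shows 1/2 under READY, kubectl describe lists the state of each container and recent events, which usually points at the problem:

kubectl describe pod multi-container-pod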

Step 5: Inspect the Containers

To verify both containers are functioning as expected, you can use the kubectl exec command to access either container inside the pod.

Check the Nginx container:

kubectl exec -it multi-container-pod -c nginx-container -- /bin/bash

Once inside, list the contents of the web server directory:

ls /usr/share/nginx/html

You should see log.txt written by the sidecar container.
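
For example, still inside the nginx container, you can view the file and then leave the shell (assuming the sidecar has already run its command):

cat /usr/share/nginx/html/log.txt
exit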

Alternatively, check the sidecar container:

kubectl exec -it multi-container-pod -c sidecar-container -- cat /shared/log.txt

You should see the log content:

Logging output from sidecar container
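
You can also read a container's stdout and stderr without exec-ing into it by using kubectl logs. The official nginx image sends its access and error logs to stdout/stderr, so the first command shows request logs; the sidecar in this example writes to a file rather than to stdout, so the second command prints nothing:

kubectl logs multi-container-pod -c nginx-container
kubectl logs multi-container-pod -c sidecar-container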

Step 6: Expose the Pod

To access the Nginx service from outside the pod, you need to expose it. This can be done by creating a Kubernetes service.

Create a service YAML file called nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: multi-container-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort

Apply this YAML file:

kubectl apply -f nginx-service.yaml

If you are using Minikube, get the URL to access the Nginx service:

minikube service nginx-service --url

For cloud environments, you can reach the service on any node's IP at the allocated NodePort (subject to firewall rules), or change type: NodePort to type: LoadBalancer and use kubectl get svc nginx-service to read the external IP it provisions.
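
Because the emptyDir volume replaces nginx's default web root, there is no index.html, so requesting the root path returns a 403; request /log.txt instead. On Minikube, for example:

curl "$(minikube service nginx-service --url)/log.txt"

This should return the line the sidecar wrote: Logging output from sidecar container.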

Step 7: Cleaning Up Resources

Once you’ve finished experimenting, you can delete the pod and service:

kubectl delete -f multi-container-pod.yaml
kubectl delete -f nginx-service.yaml
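
Alternatively, delete the resources by name and confirm the pod is gone:

kubectl delete pod multi-container-pod
kubectl delete service nginx-service
kubectl get pods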

Conclusion

Multi-container pods allow you to create tightly coupled containers that share resources and work together. In this example, we implemented a sidecar pattern, where one container provides the core service (Nginx) and the other logs data to a shared volume.

Using multi-container pods enables Kubernetes to support more complex application architectures, giving you flexibility in designing your microservices.
