Kubernetes Multi-Container Pod Patterns: Adapter Container Pattern

Navya A

In the world of Kubernetes and container orchestration, flexibility and adaptability are key. Sometimes, your containerized applications need to communicate with external systems that speak different languages or protocols. This is where the Adapter Container pattern comes into play, providing an elegant solution to bridge the gap.

Understanding the Adapter Container Pattern

[Figure: The Adapter Pattern]

At its core, the Adapter Container pattern involves deploying an additional container within the same pod as your main application container. This secondary container acts as an intermediary, adapting or modifying data or communication between your main application and external systems. The main goal is to ensure seamless integration without burdening your main application with protocol translation, data format conversion, or encryption.
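
In skeleton form, the pattern is just a second container declared in the same Pod spec. Here is a minimal sketch; the names app-with-adapter, my-app, and my-adapter are placeholders, not real images:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter        # placeholder name
spec:
  containers:
    - name: main-app            # your application, unchanged
      image: my-app             # placeholder image
    - name: adapter             # translates the app's output for the outside world
      image: my-adapter         # placeholder image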

Let's dive into an example to illustrate the power of adapter containers in Kubernetes.

Benefits of the Adapter Container Pattern

The Adapter Container pattern offers several advantages:

  1. Modularity: Your main application stays focused on its core functionality while the adapter container takes care of data adaptation and encryption. This separation of concerns enhances code maintainability.

  2. Reusability: The adapter container can be reused across multiple pods or services that require similar data transformation or encryption. It promotes code reuse and consistency.

  3. Simplicity: Your main application doesn't need to deal with the complexities of data format conversion or TLS encryption. This simplifies the codebase and makes it more manageable.

  4. Security: TLS encryption is crucial for secure data transmission. By centralizing encryption within the adapter container, you ensure that data is always encrypted before leaving your application.

Best Practices for Adapter Containers

To make the most of the Adapter Container pattern, consider these best practices:

  1. Clear Responsibilities: Ensure that the adapter container has a well-defined responsibility. Avoid mixing unrelated functionality within the same container.

  2. Resource Allocation: Set appropriate resource requests and limits for both containers to prevent resource contention (see the sketch after this list).

  3. Logging and Monitoring: Implement consistent logging and monitoring across both containers. Centralize logs and metrics to simplify troubleshooting.

  4. Container Images: Keep container images lightweight and up-to-date. Regularly update images to apply security patches.
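
As an example, resource settings for the two containers used later in this article might look like the following; the numbers are illustrative placeholders, not tuned values:

containers:
  - name: webserver
    image: nginx
    resources:
      requests:
        cpu: 100m        # placeholder values; tune to your workload
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
  - name: adapter
    image: nginx/nginx-prometheus-exporter:0.4.2
    resources:
      requests:
        cpu: 50m         # exporters are typically lightweight
        memory: 32Mi
      limits:
        cpu: 100m
        memory: 64Mi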

The Adapter Container in Action

In our Kubernetes pod, we'll set up two containers:

  1. Main Application Container (webserver): This runs an Nginx web server as the pod's main application.

  2. Adapter Container (adapter): The star of the show, responsible for collecting Nginx status data and exposing it as Prometheus metrics.

Here's what the Kubernetes YAML might look like:

apiVersion: v1
kind: Pod
metadata:
  name: webserver-1
  labels:
    app: webserver
spec:
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-conf
        items:
          - key: default.conf
            path: default.conf
  containers:
    - name: webserver
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-conf
          readOnly: true
    - name: adapter
      image: nginx/nginx-prometheus-exporter:0.4.2
      args: ["-nginx.scrape-uri","http://localhost/nginx_status"]
      ports:
        - containerPort: 9113

Here's a brief explanation of each section:

  1. Metadata:

    • name: The name of the Pod is set to "webserver-1".

    • labels: A label "app: webserver" is applied to the Pod for easy identification and grouping.

  2. Volumes:

    • A volume named nginx-conf is defined. It's associated with a ConfigMap named nginx-conf.

    • The ConfigMap provides a configuration file called default.conf, which will be mounted as default.conf within the Pod.

    • Here is the ConfigMap YAML:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: nginx-conf
        data:
          default.conf: |
            server {
              listen       80;
              server_name  localhost;
              location / {
                  root   /usr/share/nginx/html;
                  index  index.html index.htm;
              }
              error_page   500 502 503 504  /50x.html;
              location = /50x.html {
                  root   /usr/share/nginx/html;
              }
              location /nginx_status {
                stub_status;
                allow 127.0.0.1;  #only allow requests from localhost
                deny all;   #deny all other hosts
              }
            }
      
  3. Containers:

    • webserver Container:

      • name: The main application container is named "webserver".

      • image: It uses the official Nginx Docker image.

      • ports: It exposes port 80 for serving web traffic.

      • volumeMounts: It mounts the nginx-conf volume to the path /etc/nginx/conf.d with read-only access. This is where Nginx expects its configuration files.

    • adapter Container:

      • name: The second container is named "adapter".

      • image: It uses the "nginx/nginx-prometheus-exporter" image with version 0.4.2.

      • args: It specifies arguments for the container. Here, it configures the Prometheus exporter to scrape Nginx metrics from "http://localhost/nginx_status".

      • ports: It exposes port 9113 for serving Prometheus metrics.
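
To try this out end to end, you might apply both manifests and confirm that the Pod comes up with two ready containers. A quick sketch, where the file names are placeholders:

kubectl apply -f nginx-conf.yaml        # the ConfigMap shown above
kubectl apply -f webserver-pod.yaml     # the two-container Pod definition
kubectl get pod webserver-1             # expect READY 2/2 once both containers start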

In summary, the Pod definition contains two containers: the nginx container, which acts as the application container, and the adapter container. The adapter container runs nginx/nginx-prometheus-exporter, which transforms the metrics that Nginx exposes on /nginx_status into the Prometheus format. If you're interested in seeing the difference between the two, do the following:

kubectl exec -it webserver-1 -- bash
Defaulting container name to webserver.
Use 'kubectl describe pod/webserver-1 -n default' to see all of the containers in this pod.
root@webserver-1:/# apt update && apt install curl -y
root@webserver-1:/# curl localhost/nginx_status
Active connections: 1
server accepts handled requests
 3 3 3
Reading: 0 Writing: 1 Waiting: 0
root@webserver-1:/# curl localhost:9113/metrics
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 4
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 1
# HELP nginx_connections_handled Handled client connections
# TYPE nginx_connections_handled counter
nginx_connections_handled 4
# HELP nginx_connections_reading Connections where NGINX is reading the request header
# TYPE nginx_connections_reading gauge
nginx_connections_reading 0
# HELP nginx_connections_waiting Idle client connections
# TYPE nginx_connections_waiting gauge
nginx_connections_waiting 0
# HELP nginx_connections_writing Connections where NGINX is writing the response back to the client
# TYPE nginx_connections_writing gauge
nginx_connections_writing 1
# HELP nginx_http_requests_total Total http requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 4
# HELP nginx_up Status of the last metric scrape
# TYPE nginx_up gauge
nginx_up 1
# HELP nginxexporter_build_info Exporter build information
# TYPE nginxexporter_build_info gauge
nginxexporter_build_info{gitCommit="f017367",version="0.4.2"} 1

So, we logged into the webserver-1 Pod, installed curl so we could issue HTTP requests, and examined both the /nginx_status endpoint and the exporter's endpoint (served on port 9113 at /metrics). Notice that in both requests we used localhost as the server address: both containers run in the same Pod and therefore share the same network namespace and loopback address.
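
If you prefer not to exec into the Pod, port-forwarding gives you the same view from your own machine; a quick sketch:

kubectl port-forward pod/webserver-1 9113:9113   # forward the exporter port locally
curl localhost:9113/metrics                      # run this in a second terminal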

To access the Pod from outside the cluster, here is the Service YAML file:

     apiVersion: v1
     kind: Service
     metadata:
       name: myapp-nodeport-service
       labels:
         app: myapp-service-httpd
         tier: frontend
     spec:
       type: NodePort
       ports:
         - targetPort: 80
           port: 80
           nodePort: 30008
           name: nginx
         - targetPort: 9113
           port: 9113
           nodePort: 30009
           name: adapter
       selector:
         app: webserver
    

To summarize, this Service (myapp-nodeport-service) is of type "NodePort," and it exposes two ports on each node in the Kubernetes cluster:

  • Port 80 (named "nginx") routes HTTP traffic to pods with the label "app: webserver", which run an Nginx web server on port 80.

  • Port 9113 (named "adapter") serves Prometheus metrics from those same pods. This is commonly used for monitoring and collecting metrics from an application.
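
With the Service applied, both endpoints become reachable from outside the cluster. A sketch, where the file name is a placeholder and <node-ip> stands for the address of any cluster node:

kubectl apply -f myapp-nodeport-service.yaml
curl http://<node-ip>:30008/            # the Nginx welcome page
curl http://<node-ip>:30009/metrics     # the Prometheus-formatted metrics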

NodePort Services are often used when you need to reach your services from outside the cluster, typically for development or testing purposes. In a production environment, you might use an Ingress controller or a LoadBalancer Service instead.
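
To close the loop, a Prometheus server could scrape the exporter through this Service. A minimal static scrape configuration might look like the following; in a real cluster you would more likely rely on Kubernetes service discovery:

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: ["<node-ip>:30009"]   # placeholder node address, NodePort from above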

Conclusion

The Adapter Container pattern is a valuable tool in the Kubernetes toolkit, simplifying complex integration tasks and promoting modular, maintainable code. By offloading data transformation and encryption to an adapter container, you empower your main application to focus on what it does best—delivering value to your users.

As you navigate the Kubernetes ecosystem, remember that adaptability is key to success. With the Adapter Container pattern, you can seamlessly bridge the gap between your application and external systems, ensuring smooth and secure communication.

So, the next time you find yourself facing integration challenges in your containerized applications, think of the Adapter Container pattern as your trusty ally.

Happy containerizing!
