Kustomize

Kustomize is a Kubernetes-native configuration management tool that helps you manage Kubernetes applications and deployments across different environments seamlessly. Its main features are a template-free approach, improved collaboration, and enhanced code reusability.

πŸ’ How is it different from Helm charts?

Kustomize manages base configurations and customizes them for different environments using overlays and patches. It's template-free and fully declarative, making it ideal for teams that want a consistent, centralized base config with flexible environment-specific tweaks, like adjusting replica counts or adding labels. This approach minimizes duplication and keeps customizations clear and organized.
Example: A central DevOps team maintains a set of base configurations for various microservices or components. Teams in different environments (like development, staging, production) can then create their own overlays to adjust values like replicas, resource limits, or environment variables, without changing the original base configurations.

Helm acts like a template engine to create reusable charts. Charts are packaged applications (bundles of YAML files) that can be reused across different environments with the help of values.yaml files, without modifying the base chart. Helm is great when you need to handle complex deployments that require versioning, managing dependencies (like services relying on databases), or the ability to roll back to previous versions.
Example: In a microservices architecture, you might have a Helm chart for each service that gets reused across multiple environments. The chart can be configured using different values.yaml files (e.g., dev-values.yaml, prod-values.yaml) to modify things like replica counts, environment variables, and service configurations per environment. Helm also handles versioning of releases, which is especially useful in a microservices setup.

Structure:

.
├── base/                     # Base configuration
│   ├── deployment.yaml       # Web application deployment
│   ├── service.yaml          # Service definition
│   ├── configmap.yaml        # Base configuration
│   └── kustomization.yaml    # Base kustomization
└── overlays/                 # Environment-specific configs
    ├── development/          # Development environment
    │   └── kustomization.yaml
    └── production/           # Production environment
        └── kustomization.yaml

The base/ directory contains the original, unchanged configuration files and a kustomization file that defines the resources available for customization. The overlays/ directory contains environment-specific or application-specific customizations applied on top of the base.
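A minimal base kustomization for this layout might look like the following sketch (file names taken from the tree above):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml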

Kustomize reads the base configuration, applies the changes specified in the overlay, and generates the final configuration.

There are three ways to apply patches in overlays:

  1. Built-in Transformers: Use built-in transformers (such as images, replicas, namespace, or commonLabels) for common tasks.

  2. Strategic Merge Patches: Specify a partial YAML manifest that gets merged onto the matching base resource.

  3. JSON Patches: Use JSON Patch (RFC 6902) operations for precise, flexible modifications to individual fields.

Reference these patches in the overlay's kustomization.yaml.
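For example, a JSON 6902 patch can be inlined directly in the overlay's kustomization; the target name and replica value here are illustrative:

# overlays/production/kustomization.yaml (excerpt)
patches:
  - target:
      kind: Deployment
      name: nginx
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5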

🚀 Real-Time Production-Level Use Cases

✴️ Scenario 1: Updating nginx versions across environments

Dev runs an older version for stability, stage an intermediate version for testing, and prod the latest secure version.

Step-1 : Base configuration - nginx deployment

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Step-2: overlay for dev environment

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: nginx
    newTag: 1.18.0
replicas:
  - name: nginx
    count: 2
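The stage and prod overlays follow the same pattern, pinning different tags; a sketch with illustrative version numbers:

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: nginx
    newTag: 1.26.0
replicas:
  - name: nginx
    count: 3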

Step-3: Applying overlays with kustomize (dev/stage/prod)
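Assuming the directory layout above, each overlay can be previewed and applied with kubectl's built-in Kustomize support:

# Preview the rendered manifests without applying them
kubectl kustomize overlays/dev

# Apply the overlay for each environment
kubectl apply -k overlays/dev
kubectl apply -k overlays/stage
kubectl apply -k overlays/prod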

✴️ Scenario 2: Managing multi-tenant deployments with Kustomize

Running multiple instances of an application for different customers, teams, or business units, where each tenant needs its own namespace, resource limits, configuration (DB endpoints, API keys, log levels), and number of replicas.

Step-1 : Base configuration - nginx deployment

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Step-2: Tenant 1 Overlay (overlays/tenant1/kustomization.yaml)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: tenant1
namePrefix: tenant1-
replicas:
  - name: nginx
    count: 1

Step-3: Tenant 2 Overlay (overlays/tenant2/kustomization.yaml)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: tenant2
namePrefix: tenant2-
replicas:
  - name: nginx
    count: 2

Step-4: Applying Multi-Tenant Configurations
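Each tenant overlay already sets its namespace and name prefix in kustomization.yaml, so applying them is one command per tenant (assuming the namespaces exist):

kubectl create namespace tenant1
kubectl create namespace tenant2
kubectl apply -k overlays/tenant1
kubectl apply -k overlays/tenant2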

✴️ Scenario 3: Managing deployments across multiple K8s clusters

Many organizations run workloads in multiple clusters for different environments or locations:

  1. Dedicated clusters for dev, stage, and prod.

  2. Multi-cloud: AWS, GCP, Azure, or on-prem Kubernetes.

  3. DR (disaster recovery) clusters in case of failures.

  4. Regional clusters for compliance and performance.

Each cluster may have different namespaces, storage classes, Ingress configurations, and resource settings.

Ensure you have two clusters running; in my case, it's two kind clusters.

Step-1 : Base configuration - nginx deployment

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Step-2 : Dev Cluster Overlay (overlays/dev/kustomization.yaml)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev-kind-cluster-1
configMapGenerator:
  - name: app-config
    literals:
      - ENV=development
      - LOG_LEVEL=debug
patches:
- path: dev-patch.yaml

Dev Cluster Patch (overlays/dev/dev-patch.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2

Step-3 :Production Cluster Overlay (overlays/prod/kustomization.yaml)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: prod-kind-cluster-2
configMapGenerator:
  - name: app-config
    literals:
      - ENV=production
      - LOG_LEVEL=info
patches:
- path: prod-patch.yaml

Production Cluster Patch (overlays/prod/prod-patch.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3

Step-4: Applying Multi-Cluster Configurations with Kustomize

kubectl config use-context kind-kind-cluster-1
kubectl apply -k overlays/dev -n dev-kind-cluster-1

kubectl config use-context kind-kind-cluster-2
kubectl apply -k overlays/prod -n prod-kind-cluster-2

⁉️ Problem faced:

When you update a ConfigMap mounted into a pod as a volume, the new data does eventually propagate to the mounted files. However, the application does not pick up the latest values unless the pod is restarted, because the pod is unaware of what changed: ConfigMap data consumed as environment variables, or read once during startup, is only loaded when the container starts.

Step-1: Base configuration - nginx deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: ENVIRONMENT
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: ENV
        - name: LOGLEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL

Step-2: Dev Overlay (overlays/dev/kustomization.yaml)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev-kind-cluster-1
configMapGenerator:
  - name: app-config
    literals:
      - ENV=development
      - LOG_LEVEL=warn
patches:
- path: dev-patch.yaml

dev-patch.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2

✅ Solution:

Kustomize's generators create ConfigMaps and Secrets with a unique hash suffix appended to the name. For example, if the ConfigMap is named app-config, the generated one would be named app-config-74d8b68f89, where 74d8b68f89 is the appended hash. When you update the ConfigMap/Secret data, Kustomize generates a new object with the same base name but a different hash (e.g., dft5md27tt) and rewrites the Deployment to reference it. That reference change triggers a rollout, and the new pods start with the updated ConfigMap/Secret data. This way, we don't need to manually redeploy or restart the Deployment.
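A toy sketch of the idea in Python (the real Kustomize hashing algorithm is different; this only illustrates that any change to the data yields a new name, which changes the Deployment's reference and triggers a rollout):

```python
import hashlib

def generated_name(name: str, literals: dict) -> str:
    # Derive a short suffix from the ConfigMap data. Kustomize's real
    # hash function differs, but the principle is the same: the generated
    # name is a pure function of the contents.
    payload = ",".join(f"{k}={v}" for k, v in sorted(literals.items()))
    return f"{name}-{hashlib.sha256(payload.encode()).hexdigest()[:10]}"

old = generated_name("app-config", {"ENV": "development", "LOG_LEVEL": "debug"})
new = generated_name("app-config", {"ENV": "development", "LOG_LEVEL": "warn"})

# The data changed, so the generated name changed too: the Deployment now
# references a different ConfigMap, which triggers a rollout.
print(old != new)  # True
```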

Written by Gopi Vivek Manne

I'm Gopi Vivek Manne, a passionate DevOps Cloud Engineer with a strong focus on AWS cloud migrations. I have expertise in a range of technologies, including AWS, Linux, Jenkins, Bitbucket, GitHub Actions, Terraform, Docker, Kubernetes, Ansible, SonarQube, JUnit, AppScan, Prometheus, Grafana, Zabbix, and container orchestration. I'm constantly learning and exploring new ways to optimize and automate workflows, and I enjoy sharing my experiences and knowledge with others in the tech community. Follow me for insights, tips, and best practices on all things DevOps and cloud engineering!