Deploy Once, Run Everywhere: Exploring DaemonSets in K8s


Definition:
A DaemonSet in Kubernetes ensures that a copy of a specific Pod runs on every node in a cluster (or on a selected subset of nodes). It is commonly used to deploy cluster-level services that need to run on all nodes or on specific ones, ensuring consistency across the Kubernetes environment.
Challenge
Monitoring is essential in today's software world 🌍, so we need logging and monitoring Pods on every node of our cluster. Based on that logging information we can easily monitor the cluster and the applications running on it. Creating those Pods manually quickly becomes a mess, because we constantly scale the nodes in and out based on requirements. If the node count increases (for example, from 3 to 5), it is tedious to create the monitoring Pods on each new node every time, and such manual deployment is error-prone.
Solution: DaemonSet
A DaemonSet ensures that the logging or monitoring agent Pod is automatically deployed to every node in the cluster. If a new node is added, Kubernetes ensures the DaemonSet creates a Pod on that node. Similarly, when nodes are removed, the associated Pods are cleaned up.
Key Features of DaemonSet
Node Affinity: You can target specific nodes (e.g., only worker nodes or nodes with specific labels) using a node selector or node affinity rules.
Rolling Updates: Supports updating DaemonSet Pods node by node without downtime (see the updateStrategy sketch below).
Resource Optimization: DaemonSet Pods are usually lightweight agents (logging, monitoring, networking), so they add very little overhead per node.
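For reference, the rolling-update behaviour is controlled by an updateStrategy block inside the DaemonSet spec. A minimal sketch (the maxUnavailable value here is just an example, not something from the manifests below):
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1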
Creating a DaemonSet:
Unlike Deployments, kubectl does not provide an imperative kubectl create daemonset command, so a DaemonSet is created declaratively from a manifest file.
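If you prefer starting from generated YAML, one common workaround (an assumption on my part, not official DaemonSet tooling) is to dry-run a Deployment and edit the result:
kubectl create deployment myds --image=fluentd:latest --dry-run=client -o yaml > daemonset.yaml
# then edit daemonset.yaml: change kind to DaemonSet and remove the replicas and strategy fields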
Manifest file to create a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
        - name: cont-1
          image: fluentd:latest
          ports:
            - containerPort: 80
Apply the YAML File:
kubectl apply -f daemonset.yaml
NOTE: A DaemonSet doesn't have a replicas field, because it creates exactly one Pod on each eligible node.
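You can then verify the rollout with the usual commands (myds is the name from the manifest above):
kubectl get daemonset myds
kubectl rollout status daemonset/myds
kubectl get pods -o wide
The DESIRED and CURRENT counts in the DaemonSet output should match the number of nodes in the cluster.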
Real-Time Example: Deploying a Logging Agent Only on Specific Nodes Using DaemonSet
Scenario
You're managing a Kubernetes cluster in a hybrid cloud setup. Some of the nodes in your cluster are dedicated to testing environments (labeled env=test), while others are for production. To avoid resource overhead in production, you want to deploy a lightweight logging Pod only on the nodes used for testing.
Solution: Use a DaemonSet with Node Affinity
This approach ensures that the logging agent runs only on the nodes labeled env=test, and as new test nodes are added, the agent will automatically be deployed there.
Step 1: Label the Test Nodes
kubectl label node test-node-1 env=test
kubectl label node test-node-2 env=test
Check the labels:
kubectl get nodes --show-labels
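To list only the test nodes, a label selector works too:
kubectl get nodes -l env=test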
Step 2: Create a DaemonSet YAML
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-logging-agent
spec:
  selector:
    matchLabels:
      app: log
  template:
    metadata:
      labels:
        app: log
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: env
                    operator: In
                    values:
                      - test
      containers:
        - name: cont-1
          image: fluent/fluentd:v1.14.6
Step 3: Apply the DaemonSet
Deploy the DaemonSet:
kubectl apply -f logging-pod.yaml
Verify the DaemonSet:
kubectl get daemonset
Check where the Pods are running:
kubectl get pods -o wide
This will confirm that the Fluentd Pods are running only on the nodes labeled env=test.
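To narrow the output to just the logging agent, the app=log label from the manifest can be used as a selector:
kubectl get pods -l app=log -o wide
kubectl get daemonset my-logging-agent
The DaemonSet's DESIRED count should equal the number of nodes labeled env=test.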
Business Impact
Cost Optimization: Resources in production nodes are not consumed unnecessarily.
Environment Segmentation: Logs from test environments are collected separately, simplifying analysis.
Automation: New test nodes automatically receive the logging agent without manual intervention (see the quick check below).
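As a quick check of that last point (test-node-3 here is a hypothetical node name), labeling one more node is all it takes for a new agent Pod to appear:
kubectl label node test-node-3 env=test
kubectl get pods -l app=log -o wide --watch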
This setup demonstrates how DaemonSets, combined with node affinity, streamline operations in a real-world Kubernetes environment.