Mastering Node Placement in Kubernetes: Node Selectors vs. Node Affinity

Rahul Bansod
4 min read

In Kubernetes, controlling where your Pods are scheduled can be essential for managing workloads, ensuring resource optimization, and maintaining availability. Kubernetes provides two powerful mechanisms for this: Node Selectors and Node Affinity. While they seem similar, each has distinct characteristics and use cases. In this post, we’ll dive deep into how they work, their differences, and practical examples to help you choose the right tool for your scheduling needs.

Node Selectors: Simple but Limited Placement Control

Node Selectors provide a straightforward way to control Pod scheduling by specifying a requirement for a Pod to be scheduled on nodes with specific labels. This approach is particularly useful when you want to guarantee that a Pod will run on a particular type of node, but it does have limitations.

How Node Selectors Work

A Node Selector is a simple, binary filter: if a node has the required label, the Pod can be scheduled there; otherwise, it cannot. However, it lacks the ability to express more complex logical expressions or soft preferences. For example, it can't handle statements like “choose nodes where foo=A or foo=B” or "prefer nodes with bar=C but allow others if necessary."
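For instance, every label listed under nodeSelector must be present on the node; the entries are always ANDed. A minimal sketch (the foo/bar labels, Pod name, and image are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelled-pod
spec:
  nodeSelector:
    foo: A        # the node must carry foo=A ...
    bar: C        # ... AND bar=C; "foo=A or foo=B" cannot be expressed here
  containers:
  - name: app
    image: myapp:latest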

Example: Node Selector in Action

Let’s say we have two types of nodes in our Kubernetes cluster:

  • High CPU Nodes: Labeled with cpu=high

  • Regular Nodes: Labeled with cpu=standard
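
These labels are applied to the nodes ahead of time, for example with kubectl label nodes <node-name> cpu=high, where the node name is a placeholder for whatever your cluster uses.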

We want a specific workload to always run on high-CPU nodes. We can use a Node Selector in the Pod spec to enforce this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: high-cpu-app
spec:
  replicas: 2
  selector:           # required in apps/v1; must match the Pod template labels
    matchLabels:
      app: high-cpu-app
  template:
    metadata:
      labels:
        app: high-cpu-app
    spec:
      nodeSelector:
        cpu: high
      containers:
      - name: app
        image: myapp:latest
        resources:
          requests:
            cpu: "500m"

In this example, the Node Selector ensures that Pods are only scheduled on nodes labeled with cpu=high. While simple, this configuration lacks flexibility; if all high-CPU nodes are unavailable, the Pod will remain unscheduled rather than defaulting to another node type.
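When that happens the Pods simply sit in Pending, and running kubectl describe pod on one of them shows the scheduler's events explaining why (typically that no node matched the required label).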

When to Use Node Selectors

Node Selectors are ideal for:

  • Simple Requirements: When your workload needs a particular resource or hardware feature.

  • Guaranteed Placement: When you have specific nodes for specific workloads, and flexibility isn’t required.

Node Affinity: Flexibility and Advanced Scheduling

Node Affinity, introduced in Kubernetes 1.2, provides more sophisticated scheduling controls. It builds on Node Selectors by allowing more complex expressions as well as soft rules that represent preferences rather than strict requirements.

Types of Node Affinity

Node Affinity introduces two types of rules:

  • requiredDuringSchedulingIgnoredDuringExecution: Enforces hard requirements, much like a Node Selector, but with the flexibility to express complex conditions, as in the operator sketch after this list.

  • preferredDuringSchedulingIgnoredDuringExecution: Expresses soft preferences, letting Kubernetes try to schedule the Pod on preferred nodes while still allowing it to fall back on other nodes if necessary.
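
Both rule types are written as matchExpressions, which accept the operators In, NotIn, Exists, DoesNotExist, Gt, and Lt; expressions within one matchExpressions list are ANDed, while multiple nodeSelectorTerms are ORed. A minimal sketch of a condition a plain Node Selector cannot express (the foo and lifecycle labels are illustrative):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # the node must have foo=A or foo=B ...
        - key: foo
          operator: In
          values:
          - A
          - B
        # ... and must not be labeled lifecycle=spot
        - key: lifecycle
          operator: NotIn
          values:
          - spot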

Example: Node Affinity in Action

Consider a scenario where we want our Pods to prefer high-CPU nodes if they are available, but we also want them to be able to run on standard nodes if high-CPU nodes aren’t available.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flexible-cpu-app
spec:
  replicas: 3
  selector:           # required in apps/v1; must match the Pod template labels
    matchLabels:
      app: flexible-cpu-app
  template:
    metadata:
      labels:
        app: flexible-cpu-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: environment
                operator: In
                values:
                - production
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                - key: cpu
                  operator: In
                  values:
                  - high
      containers:
      - name: app
        image: myapp:latest
        resources:
          requests:
            cpu: "500m"

In this example, Node Affinity lets us express both required and preferred node labels:

  • Required Node Affinity (requiredDuringSchedulingIgnoredDuringExecution): Pods can only run on nodes labeled as environment=production, which may reflect a more controlled or secure environment.

  • Preferred Node Affinity (preferredDuringSchedulingIgnoredDuringExecution): Pods will prefer nodes with cpu=high if they’re available, but if they’re not, the Pods can still run on other production nodes with standard CPUs.

Node Affinity vs. Node Selector

| Feature | Node Selector | Node Affinity |
| --- | --- | --- |
| Complex expressions | No | Yes |
| Soft preferences | No | Yes (preferredDuringSchedulingIgnoredDuringExecution) |
| Anti-affinity behavior | No | Yes for node labels (NotIn / DoesNotExist); Pod-to-Pod anti-affinity is the separate podAntiAffinity feature |
| Best for | Basic scheduling requirements | Advanced scheduling and multi-level policies |

Node Affinity for Real-World Use Cases

Node Affinity is ideal when:

  1. Workloads Have Strong Preferences: For example, analytics jobs that benefit from high-CPU nodes, but can run on standard nodes if necessary.

  2. Multi-Environment Clusters: Production vs. non-production environments, where production Pods are restricted to production-labeled nodes.

  3. Flexible Scaling Policies: When workloads can fall back to other node types in case preferred nodes are fully occupied, as sketched below.
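
For the first and third cases, several weighted preferences can be combined: each weight (1 to 100) is added to a node's score when its term matches, so heavier preferences win whenever they can be satisfied. A rough sketch (the weights and the zone value are illustrative, not from the examples above):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    # strongly prefer high-CPU nodes ...
    - weight: 80
      preference:
        matchExpressions:
        - key: cpu
          operator: In
          values:
          - high
    # ... and mildly prefer a particular zone (value is illustrative)
    - weight: 20
      preference:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a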

Conclusion

Both Node Selectors and Node Affinity play a crucial role in Pod placement and resource optimization in Kubernetes. Node Selectors are best for simple requirements when strict control is enough. Node Affinity, on the other hand, offers the flexibility and complexity to handle nuanced scheduling needs, allowing you to prioritize resources while maintaining availability.

Choosing between Node Selectors and Node Affinity depends on the complexity of your scheduling requirements and how much control you need over Pod placement.
