# Kubernetes Node Affinity
Node Affinity lets a pod express preferences or requirements about which nodes it should be scheduled on, based on node labels. It’s the pod saying “I want to run on nodes that look like this.”
## Node Affinity vs. nodeSelector
`nodeSelector` is the older, simpler way to pin pods to nodes — just a flat key/value match. Node Affinity is its more expressive successor, supporting operators like `In`, `NotIn`, `Gt`, `Lt`, and `Exists`.
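To see the difference concretely, here is a sketch of the same constraint expressed both ways (the `disktype=ssd` label is a hypothetical example, not from the original text):

```yaml
# Old style: nodeSelector — flat key/value match only
spec:
  nodeSelector:
    disktype: ssd
---
# Same constraint expressed with node affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: [ssd]
```

The affinity form is more verbose for a simple equality match, but it opens up the full operator set and the soft/hard distinction described below.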
## The two types of Node Affinity
1. `requiredDuringSchedulingIgnoredDuringExecution` — **Hard rule.** The pod will not be scheduled unless a matching node exists. Think of it as a mandatory constraint.
2. `preferredDuringSchedulingIgnoredDuringExecution` — **Soft rule.** The scheduler tries to place the pod on a matching node, but falls back to any node if none match. You assign a weight (1–100) to express how strongly you prefer it.
The `IgnoredDuringExecution` part means: if a node’s labels change after a pod is already running there, the pod won’t be evicted. (A future `RequiredDuringExecution` variant is planned to handle this.)
## Structure
```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - us-east-1a
                  - us-east-1b
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80
          preference:
            matchExpressions:
              - key: node-type
                operator: In
                values:
                  - high-memory
        - weight: 20
          preference:
            matchExpressions:
              - key: disk-type
                operator: In
                values:
                  - ssd
```
## Available operators
| Operator | Meaning |
|---|---|
| `In` | Label value is in the list |
| `NotIn` | Label value is not in the list |
| `Exists` | Label key is present (any value) |
| `DoesNotExist` | Label key is absent |
| `Gt` | Label value is greater than (numeric) |
| `Lt` | Label value is less than (numeric) |
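A quick sketch of the less common operators (the `cpu-count` and `dedicated` labels here are hypothetical examples): `Gt`/`Lt` compare against a single integer-valued string, while `Exists`/`DoesNotExist` take no `values` list at all.

```yaml
matchExpressions:
  - key: cpu-count        # hypothetical numeric label, e.g. cpu-count=16
    operator: Gt
    values: ["8"]         # Gt/Lt: exactly one integer-parseable string
  - key: dedicated        # hypothetical label
    operator: Exists      # Exists/DoesNotExist: omit values entirely
```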
## nodeSelectorTerms vs. matchExpressions logic
This is a common point of confusion:
- Multiple `nodeSelectorTerms` are OR’d — the pod can match any one of them
- Multiple `matchExpressions` within a term are AND’d — all must be satisfied
```yaml
nodeSelectorTerms:
  - matchExpressions:          # Term 1
      - key: zone
        operator: In
        values: [us-east-1a]   # Must be in us-east-1a
      - key: disk
        operator: In
        values: [ssd]          # AND must have ssd
  - matchExpressions:          # Term 2 (OR)
      - key: zone
        operator: In
        values: [us-west-2a]   # OR just be in us-west-2a
```
## Common use cases
- **Zone/region pinning** — Ensure a pod runs in a specific availability zone for latency or compliance reasons.
- **Hardware requirements** — Schedule ML training jobs only on nodes labeled `gpu=true` or `accelerator=nvidia`.
- **Tiered node pools** — Prefer expensive high-memory nodes for a workload, but fall back to standard nodes if unavailable (use `preferred` with a high weight).
- **Topology spread** — Combined with `topologySpreadConstraints`, affinity helps distribute pods evenly across zones or racks.
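The tiered-node-pool case is worth sketching, since it is the one that depends on the soft rule’s fallback behavior (the `node-pool=high-memory` label is a hypothetical example):

```yaml
# Prefer (but don't require) high-memory nodes; if none are
# available, the pod still schedules onto standard nodes.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 90              # strong preference, still not mandatory
        preference:
          matchExpressions:
            - key: node-pool    # hypothetical label
              operator: In
              values: [high-memory]
```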
## How Taints/Tolerations and Node Affinity work together
| Mechanism | Driven by | Style |
|---|---|---|
| Taints + Tolerations | Node repels pods | Exclusion / opt-in |
| Node Affinity | Pod seeks nodes | Attraction / preference |
A typical pattern is to use both:
- Taint the node so random pods don’t land on it
- Use Node Affinity on the right pods to actively attract them to it
This gives you precise two-way control over pod placement.
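As a sketch of that two-way pattern (node name, taint key, and labels here are all hypothetical): taint the node, then give only the intended pods both a matching toleration and a required affinity.

```yaml
# Step 1 (on the node, via shell):
#   kubectl taint nodes gpu-node-1 workload=gpu:NoSchedule
#
# Step 2 (on the GPU pods): tolerate the taint AND require the node.
spec:
  tolerations:
    - key: workload
      operator: Equal
      value: gpu
      effect: NoSchedule      # lets the pod onto the tainted node
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu      # hypothetical node label gpu=true
                operator: In
                values: ["true"]
```

The toleration alone would only *permit* the pod to land on the GPU node; the affinity is what actively steers it there.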