Understanding Kubernetes Node Affinity

Kubernetes Node Affinity

Node Affinity lets a pod express preferences or requirements about which nodes it should be scheduled on, based on node labels. It’s the pod saying “I want to run on nodes that look like this.”

Node Affinity vs. nodeSelector

nodeSelector is the older, simpler way to pin pods to nodes — just a flat key/value match. Node Affinity is the more expressive successor, supporting operators such as In, NotIn, Gt, Lt, Exists, and DoesNotExist.
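For example, here is a minimal sketch of the same constraint expressed both ways (the disktype=ssd label is a placeholder):

# nodeSelector: a flat key/value match
spec:
  nodeSelector:
    disktype: ssd

# The equivalent hard rule written as Node Affinity:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd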


The two types of Node Affinity

1. requiredDuringSchedulingIgnoredDuringExecution: Hard rule — the pod will not be scheduled unless a matching node exists. Think of it as a mandatory constraint.

2. preferredDuringSchedulingIgnoredDuringExecution: Soft rule — the scheduler tries to place the pod on a matching node, but falls back to any node if none match. You assign a weight (1–100) to express how strongly you prefer it.

The IgnoredDuringExecution part means: if a node’s labels change after a pod is already running there, the pod won’t be evicted. (A future RequiredDuringExecution type is planned to handle this.)


Structure
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a
            - us-east-1b
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        preference:
          matchExpressions:
          - key: node-type
            operator: In
            values:
            - high-memory
      - weight: 20
        preference:
          matchExpressions:
          - key: disk-type
            operator: In
            values:
            - ssd

Available operators
Operator       Meaning
In             Label value is in the list
NotIn          Label value is not in the list
Exists         Label key is present (any value)
DoesNotExist   Label key is absent
Gt             Label value is greater than (numeric)
Lt             Label value is less than (numeric)
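Gt and Lt take exactly one value and compare the label's value as an integer. A quick sketch (the cpu-count and dedicated labels here are hypothetical):

nodeSelectorTerms:
- matchExpressions:
  - key: cpu-count      # hypothetical label, e.g. set by provisioning tooling
    operator: Gt
    values:
    - "8"               # single value, compared numerically: matches cpu-count > 8
  - key: dedicated
    operator: Exists    # only checks that the key is present; no values list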

nodeSelectorTerms vs. matchExpressions logic

This is a common point of confusion:

  • Multiple nodeSelectorTerms are OR’d — the pod can match any one of them
  • Multiple matchExpressions within a term are AND’d — all must be satisfied
nodeSelectorTerms:
- matchExpressions:        # Term 1
  - key: zone
    operator: In
    values: [us-east-1a]   # Must be in us-east-1a
  - key: disk
    operator: In
    values: [ssd]          # AND must have ssd
- matchExpressions:        # Term 2 (OR)
  - key: zone
    operator: In
    values: [us-west-2a]   # OR just be in us-west-2a

Common use cases

Zone/region pinning — Ensure a pod runs in a specific availability zone for latency or compliance reasons.

Hardware requirements — Schedule ML training jobs only on nodes labeled gpu=true or accelerator=nvidia.
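For instance, a sketch of such a hard requirement, using the accelerator=nvidia label mentioned above:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: accelerator
            operator: In
            values:
            - nvidia    # pod stays Pending if no node carries this label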

Tiered node pools — Prefer expensive high-memory nodes for a workload, but fall back to standard nodes if unavailable (use preferred with a high weight).

Topology spread — Combined with topologySpreadConstraints, affinity helps distribute pods evenly across zones or racks.
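A minimal sketch of that combination, assuming the pods carry an app: web label: affinity (as above) restricts which nodes are eligible, while the spread constraint balances pods across the zones that remain:

spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # zone pod counts may differ by at most 1
    topologyKey: topology.kubernetes.io/zone  # spread across zones
    whenUnsatisfiable: ScheduleAnyway         # soft: prefer balance, don't block scheduling
    labelSelector:
      matchLabels:
        app: web                              # which pods are counted for skew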


How Taints/Tolerations and Node Affinity work together
Mechanism              Driven by           Style
Taints + Tolerations   Node repels pods    Exclusion / opt-in
Node Affinity          Pod seeks nodes     Attraction / preference

A typical pattern is to use both:

  1. Taint the node so random pods don’t land on it
  2. Use Node Affinity on the right pods to actively attract them to it

This gives you precise two-way control over pod placement.
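A sketch of that pattern, assuming a hypothetical GPU node named gpu-node-1 with taint dedicated=gpu and label node-type=gpu:

# Step 1 (run once against the node):
#   kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule
#   kubectl label nodes gpu-node-1 node-type=gpu

# Step 2: the pod tolerates the taint (permission) and seeks the label (attraction)
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type
            operator: In
            values:
            - gpu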
