Week 13 - The CNI Spec and Pod Networking
13.1 Conceptual Core
- The CNI (Container Network Interface) spec is small (~30 pages). A CNI plugin is a binary that the kubelet (via the runtime) invokes when a Pod sandbox is created. Inputs: container ID, network namespace path, JSON config. Outputs: assigned IP, routes.
- Kubelet does not know about networking beyond "ask the CNI." This is what makes the dataplane pluggable.
- Modern CNIs ship as DaemonSets that program kernel rules (iptables, OVS, eBPF) and run a small "agent" plus a thin "delegator" CNI binary.
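
The invocation contract above (CNI_* environment variables in, config JSON on stdin, result JSON on stdout) can be sketched as a toy plugin. This is illustrative only: the `handleAdd` helper and the hard-coded `10.1.0.5/24` address are made up for the sketch, and a real plugin would run IPAM and actually configure the netns.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// NetConf mirrors the required top-level fields of a CNI network config.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// IP and Result are a trimmed-down version of the CNI ADD result.
type IP struct {
	Address string `json:"address"`
}
type Result struct {
	CNIVersion string `json:"cniVersion"`
	IPs        []IP   `json:"ips"`
}

// handleAdd parses the config and returns a fabricated Result; a real
// plugin would run IPAM and set up the netns here.
func handleAdd(conf []byte) ([]byte, error) {
	var nc NetConf
	if err := json.Unmarshal(conf, &nc); err != nil {
		return nil, err
	}
	return json.Marshal(Result{
		CNIVersion: nc.CNIVersion,
		IPs:        []IP{{Address: "10.1.0.5/24"}}, // placeholder address
	})
}

func main() {
	// The runtime passes the operation in CNI_COMMAND and the network
	// config JSON on stdin; the plugin answers with JSON on stdout.
	conf, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		out, err := handleAdd(conf)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		os.Stdout.Write(out)
	case "DEL", "CHECK", "VERSION":
		// a real plugin tears down, verifies, or reports versions here
	}
}
```

Note the shape of the interface: the binary is stateless between invocations, which is why real CNIs keep their long-lived logic in a node agent and leave only this thin exec surface.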
13.2 Mechanical Detail
- Read `containernetworking/cni/SPEC.md`. Operations: `ADD`, `DEL`, `CHECK`, `VERSION`.
- The CNI binary must be at `/opt/cni/bin/<name>`; config at `/etc/cni/net.d/*.conf`.
- The kubelet → CRI → CNI flow:
- Kubelet asks runtime to create a sandbox.
- Runtime creates a netns and calls CNI `ADD`.
- CNI assigns an IP and sets up the netns.
- Containers in the Pod do not create a new network namespace (no `CLONE_NEWNET`); they join the sandbox's existing netns.
- CNI chains: multiple plugins composed, each running in order (e.g., a primary CNI + a metering plugin + a port-mapping plugin).
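
A chain is declared as a `.conflist` file under `/etc/cni/net.d`. A minimal sketch composing the reference `bridge` plugin with `portmap` (the `podnet` name and the subnet are placeholders; assumes recent `containernetworking/plugins`):

```json
{
  "cniVersion": "1.0.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Each plugin in the `plugins` array runs in order on `ADD` (and in reverse on `DEL`), receiving the previous plugin's result.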
13.3 Lab: "Read a CNI's Source"
- Pick a simple CNI (`flannel`, or the reference `bridge` plugin from `containernetworking/plugins`). Read its `cmdAdd` end to end.
- Deploy a small kind cluster; trace a Pod creation in the kubelet log; correlate with the CNI binary invocation.
- Use `nsenter -t <pause-pid> -n ip a` to inspect the container's network namespace from the host (the pause PID can be found with `crictl inspectp`).
13.4 Hardening Drill
- Default-deny `NetworkPolicy` per namespace. Allow only intended Pod-to-Pod and Pod-to-Service traffic.
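
The drill can start from manifests like these (the namespace and labels are placeholders). Listing both `Ingress` and `Egress` in `policyTypes` makes the deny cover both directions; later policies then punch explicit holes:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace      # placeholder
spec:
  podSelector: {}              # selects every Pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
# Example hole: allow frontend Pods to reach backend Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-namespace
spec:
  podSelector:
    matchLabels: {app: backend}
  ingress:
  - from:
    - podSelector:
        matchLabels: {app: frontend}
```

Note that once egress is denied, the frontend Pods also need a matching egress rule for the same traffic, plus an allowance for DNS, or Service lookups will fail.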
13.5 Operations Slice
- Monitor CNI errors in kubelet logs. A node with consistent CNI `ADD` failures will have Pods stuck in Pending; alert on this.