
Week 13 - The CNI Spec and Pod Networking

13.1 Conceptual Core

  • The CNI (Container Network Interface) spec is small (~30 pages). A CNI plugin is a binary that the kubelet (via the runtime) invokes when a Pod sandbox is created. Inputs: container ID, network namespace path, and JSON config (passed via environment variables and stdin). Outputs: assigned IP(s) and routes (as JSON on stdout).
  • Kubelet does not know about networking beyond "ask the CNI." This is what makes the dataplane pluggable.
  • Modern CNIs ship as DaemonSets that program kernel rules (iptables, OVS, eBPF) and run a small "agent" plus a thin "delegator" CNI binary.

13.2 Mechanical Detail

  • Read containernetworking/cni/SPEC.md. Operations: ADD, DEL, CHECK, VERSION.
  • The CNI binary must be at /opt/cni/bin/<name>; config at /etc/cni/net.d/*.conf (or *.conflist for plugin chains).
  • The kubelet → CRI → CNI flow:
  1. Kubelet asks the runtime to create a sandbox.
  2. The runtime creates a netns and calls CNI ADD.
  3. The CNI plugin assigns an IP and configures interfaces and routes inside the netns.
  4. Additional containers in the sandbox join the existing netns (via setns) rather than unsharing a new one (no CLONE_NEWNET).
  • CNI chains: multiple plugins composed, each running in order (e.g., a primary CNI + a metering plugin + a port-mapping plugin).
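A chain is expressed as a .conflist in /etc/cni/net.d/: the runtime executes the plugins array in order, feeding each plugin the previous result. A minimal sketch using two real plugins from containernetworking/plugins (the network name and IPAM subnet are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```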

13.3 Lab: "Read a CNI's Source"

  1. Pick a simple CNI (flannel or the reference bridge plugin from containernetworking/plugins). Read its cmdAdd end to end.
  2. Deploy a small kind cluster; trace a Pod creation in the kubelet log; correlate with the CNI binary invocation.
  3. Use nsenter -t <pause-pid> -n ip a to inspect the container's network namespace from the host.

13.4 Hardening Drill

  • Default-deny NetworkPolicy per namespace. Allow only intended Pod-to-Pod and Pod-to-Service traffic.
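The drill above starts from a well-known manifest shape: a policy with an empty podSelector matches every Pod in the namespace, and declaring both policy types with no allow rules denies all traffic. Intended flows are then re-opened with additional policies. The namespace name here is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: myapp   # assumption: substitute the namespace you are hardening
spec:
  podSelector: {}    # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```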

13.5 Operations Slice

  • Monitor kubelet logs for CNI errors. A node whose CNI ADD calls consistently fail will accumulate Pods stuck in Pending; alert on this.
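As a starting point for such an alert, a log scan might look like the sketch below. The matched substrings are assumptions; check the exact error text your kubelet and runtime versions emit and adjust the patterns:

```go
package main

import (
	"fmt"
	"strings"
)

// countCNIFailures counts log lines that look like CNI ADD failures.
// The substrings matched here are assumptions about kubelet wording,
// which varies across versions -- verify against your own logs.
func countCNIFailures(logLines []string) int {
	n := 0
	for _, line := range logLines {
		if strings.Contains(line, "failed to set up pod") ||
			strings.Contains(line, "failed to setup network") {
			n++
		}
	}
	return n
}

func main() {
	sample := []string{
		`E0101 12:00:00 kubelet: failed to set up pod "web-0" network: plugin returned error`,
		`I0101 12:00:01 kubelet: pod "api-1" sandbox ready`,
	}
	fmt.Println(countCNIFailures(sample)) // prints 1
}
```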
