The CNI Handshake

Trace the sequence of events when a Pod is scheduled and the veth pair is physically created and attached.

To understand the exact sequence of events that wires a Pod into the cluster network, it is critical to first dispel a common architectural misconception: the kubelet does not invoke the CNI plugins directly, nor does it communicate with them over a loopback socket.

Instead, Kubernetes relies on a strict delegation model. The kubelet communicates with the node's Container Runtime (like containerd or CRI-O) via the Container Runtime Interface (CRI). It is the Container Runtime that executes the Container Network Interface (CNI) plugins, which in turn configure the loopback interface and physically wire the veth pair.

Here is the precise, step-by-step architectural workflow of how a Pod's network is established from the moment the scheduler assigns it to a node.


1. Pod Admission and the Sandbox Request

After the Kubernetes scheduler assigns a Pod to a worker node, the kubelet on that node detects the assignment and begins the admission process. Once any required storage volumes are mounted, the kubelet issues a RunPodSandbox gRPC request to the container runtime via the CRI.

This request instructs the runtime to create the foundational environment for the Pod, known as the "Pod Sandbox."
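Concretely, the CRI call carries a PodSandboxConfig identifying the Pod. The real interface is protobuf over gRPC (k8s.io/cri-api); the Python dict below is only an illustrative model of a few well-known fields, and the Pod name, namespace, and UID are placeholders:

```python
# Illustrative sketch of the shape of a CRI RunPodSandbox request.
# The real API is protobuf over gRPC; this dict mirrors a handful of
# well-known PodSandboxConfig fields purely for readability.

def build_run_pod_sandbox_request(name, namespace, uid):
    """Model the PodSandboxConfig the kubelet sends to the runtime."""
    return {
        "config": {
            "metadata": {
                "name": name,           # Pod name
                "namespace": namespace, # Pod namespace
                "uid": uid,             # Pod UID from the API server
                "attempt": 0,           # bumped if the sandbox is recreated
            },
            "log_directory": f"/var/log/pods/{namespace}_{name}_{uid}",
        }
    }

req = build_run_pod_sandbox_request("web-0", "default", "0d5aad3e")
print(req["config"]["metadata"])
```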

2. Network Namespace Creation

Upon receiving the request, the container runtime provisions the sandbox. A core part of this sandbox is the creation of isolated Linux namespaces, most notably the network namespace.

The runtime typically starts a tiny infrastructure container (the pause container) whose sole purpose is to hold this newly created network namespace open for the lifespan of the Pod. At this exact moment, the network namespace exists, but it has no network interfaces and no IP address.
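On Linux, a network namespace is just a kernel object identified by an inode, visible as a symlink under /proc/&lt;pid&gt;/ns/net; it is the pause container's copy of that path that the runtime later hands to the CNI plugins. A minimal, Linux-only sketch of reading that identity:

```python
import os

# Every process's current network namespace is exposed as a symlink at
# /proc/<pid>/ns/net; the link target names the namespace by inode,
# e.g. "net:[4026531840]". For a pause container with PID 4242, the
# runtime would later pass /proc/4242/ns/net to the CNI plugins.

def netns_identity(pid="self"):
    """Return the kernel's identity string for a process's netns."""
    return os.readlink(f"/proc/{pid}/ns/net")

print(netns_identity())  # e.g. net:[4026531840]
```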

3. CNI Execution and the Loopback Interface

With the network namespace created, the container runtime must now configure the networking. The runtime parses the CNI configuration files (typically found in /etc/cni/net.d/) and executes the specified CNI plugin binaries (from /opt/cni/bin/).
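The shape of such a configuration is worth seeing once. Below is a minimal, illustrative configuration list (e.g. /etc/cni/net.d/10-mynet.conflist) using the reference bridge and host-local plugins; the network name, bridge name, and subnet are placeholders:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```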

Kubernetes strictly requires that every Pod sandbox be provided with a loopback interface (lo). The container runtime achieves this by invoking a dedicated CNI loopback plugin.

  • The Execution: The container runtime executes the loopback CNI binary, passing it the path to the target network namespace (typically /proc/&lt;pid&gt;/ns/net of the pause container) via the CNI_NETNS environment variable.
  • The Action: The plugin enters the Pod's network namespace and brings up the local lo interface, allowing containers within the same Pod to communicate with each other over localhost.
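Mechanically, "invoking a CNI plugin" means exec'ing a binary with a set of CNI_* environment variables and the network configuration JSON on its stdin, per the CNI specification. A sketch of the environment a runtime might assemble for the loopback ADD call (the container ID and netns path are illustrative):

```python
import json

# Sketch of how a runtime shells out to a CNI plugin: parameters travel
# as CNI_* environment variables, and the network configuration JSON is
# written to the plugin's stdin. In a real runtime, this env and payload
# would be handed to something like
#   subprocess.run(["/opt/cni/bin/loopback"], env=env, input=stdin_payload)

def cni_env(command, container_id, netns_path, ifname="eth0"):
    """Build the CNI environment for one plugin invocation."""
    return {
        "CNI_COMMAND": command,          # ADD, DEL, CHECK, or VERSION
        "CNI_CONTAINERID": container_id, # sandbox/container ID
        "CNI_NETNS": netns_path,         # e.g. /proc/<pause-pid>/ns/net
        "CNI_IFNAME": ifname,            # interface to manage inside the Pod
        "CNI_PATH": "/opt/cni/bin",      # where plugin binaries live
    }

env = cni_env("ADD", "f81d4fae7dec", "/proc/4242/ns/net", ifname="lo")
stdin_payload = json.dumps({"cniVersion": "1.0.0", "name": "lo", "type": "loopback"})
print(env["CNI_COMMAND"], env["CNI_IFNAME"])
```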

4. Physical Wiring: The veth Pair Creation

Next, the container runtime invokes the primary CNI network plugin (such as Calico, Cilium, or the basic bridge plugin) to connect the Pod to the broader cluster network.

The primary CNI plugin performs the physical wiring using a virtual Ethernet (veth) device pair, which behaves like a patch cable between namespaces:

  1. Creation: The CNI plugin asks the Linux kernel to generate a veth pair. A veth pair acts as a virtual wire; whatever enters one end immediately comes out the other.
  2. Host-Side Attachment: The plugin leaves one end of the veth pair in the host's root network namespace. It typically attaches this end to a virtual network bridge (like cni0 or an OVS bridge) that manages traffic routing on the node.
  3. Pod-Side Attachment: The plugin moves the other end of the veth pair into the Pod's isolated network namespace. It usually renames this interface to eth0 inside the Pod so that the application sees a standard network adapter.
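The three steps above correspond roughly to a short sequence of ip(8) invocations. The sketch below only assembles the command lines; the interface names (veth1a2b, cni0) and the pause-container PID are illustrative, and actually executing them would require CAP_NET_ADMIN:

```python
# Roughly the ip(8) equivalent of the veth wiring a bridge-style CNI
# plugin performs. This sketch composes the commands as strings rather
# than running them, since the real operations need root privileges.

def veth_wiring_commands(host_if, pause_pid, bridge="cni0", pod_if="eth0"):
    tmp_if = host_if + "p"  # temporary name for the peer before the move
    return [
        # 1. Creation: one virtual wire with two ends.
        f"ip link add {host_if} type veth peer name {tmp_if}",
        # 2. Host-side attachment: enslave to the node bridge, bring it up.
        f"ip link set {host_if} master {bridge}",
        f"ip link set {host_if} up",
        # 3. Pod-side attachment: move the peer into the Pod's netns...
        f"ip link set {tmp_if} netns {pause_pid}",
        # ...then rename it to eth0 and bring it up inside that namespace.
        f"nsenter -t {pause_pid} -n ip link set {tmp_if} name {pod_if}",
        f"nsenter -t {pause_pid} -n ip link set {pod_if} up",
    ]

for cmd in veth_wiring_commands("veth1a2b", 4242):
    print(cmd)
```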

5. IPAM and Route Configuration

Once the veth pair is physically attached, the primary CNI plugin invokes an IP Address Management (IPAM) plugin (such as host-local) to allocate a unique IP address for the Pod from the CIDR block assigned exclusively to that node (its podCIDR).

  • The CNI plugin assigns this IP address to the eth0 interface inside the Pod's network namespace.
  • It configures the default routing rules inside the Pod to push all outbound traffic through the eth0 interface, across the veth pair, and out to the host's bridge.
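The allocation logic of a host-local-style IPAM plugin can be sketched with the standard library's ipaddress module: hand out the next free address from the node's podCIDR, with the first host conventionally reserved as the bridge/gateway address. The CIDR and the "already allocated" set below are illustrative:

```python
import ipaddress

# Sketch of host-local-style IPAM: allocate the next free address from
# the node's podCIDR. The first usable host (e.g. 10.244.1.1) is
# conventionally reserved for the bridge/gateway; the CIDR is a placeholder.

def allocate_pod_ip(pod_cidr, allocated):
    """Return (pod_ip, gateway_ip) for the next free address in pod_cidr."""
    net = ipaddress.ip_network(pod_cidr)
    hosts = net.hosts()
    gateway = next(hosts)            # first host -> bridge address
    for ip in hosts:
        if ip not in allocated:
            allocated.add(ip)
            return ip, gateway
    raise RuntimeError(f"podCIDR {pod_cidr} exhausted")

in_use = set()
ip1, gw = allocate_pod_ip("10.244.1.0/24", in_use)
ip2, _ = allocate_pod_ip("10.244.1.0/24", in_use)
print(ip1, ip2, gw)  # 10.244.1.2 10.244.1.3 10.244.1.1
```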

6. Kubelet Resumes and Starts Containers

The CNI plugin completes its execution and returns a success response to the container runtime, which in turn returns a successful RunPodSandbox response to the kubelet.
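Per the CNI specification, that success response is a JSON document the plugin writes to stdout, describing the interfaces, IPs, and routes it configured. A sketch of its shape (every value below is illustrative):

```json
{
  "cniVersion": "1.0.0",
  "interfaces": [
    { "name": "eth0", "mac": "aa:bb:cc:dd:ee:ff", "sandbox": "/proc/4242/ns/net" }
  ],
  "ips": [
    { "address": "10.244.1.2/24", "gateway": "10.244.1.1", "interface": 0 }
  ],
  "routes": [
    { "dst": "0.0.0.0/0" }
  ]
}
```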

The kubelet acknowledges the successful completion of the sandbox creation and network configuration by setting the Pod's PodReadyToStartContainers status condition to True.

Only at this point does the kubelet instruct the container runtime to start pulling the actual application container images and launching the workload containers. Because these application containers are instructed to join the already-configured network namespace of the sandbox, they immediately inherit the eth0 interface, the assigned IP address, and the lo loopback interface prepared by the CNI plugins.

Based on Kubernetes v1.35 (Timbernetes).