How does the Container Runtime Interface interact with containerd specifically?

Following are the specific mechanical and configuration interactions between the Kubernetes kubelet and containerd via the Container Runtime Interface (CRI).

The interaction relies on a client-server model where the kubelet acts as the gRPC client and containerd acts as the gRPC server.

1. The Communication Channel: gRPC over Sockets

The kubelet does not interact with containerd directly through system calls or CLI commands. Instead, it connects to a specific endpoint exposed by containerd to send Protocol Buffer messages via gRPC.

  • Linux Endpoint: The default communication channel is a Unix domain socket located at unix:///run/containerd/containerd.sock.
  • Windows Endpoint: The default communication channel is a named pipe located at npipe://./pipe/containerd-containerd.

When the kubelet starts, it connects to whatever runtime endpoint it has been given. If you are using containerd, you may need to explicitly configure the kubelet with --container-runtime-endpoint=unix:///run/containerd/containerd.sock (or the containerRuntimeEndpoint field in its KubeletConfiguration file) to ensure it connects to the correct socket.
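
A quick way to confirm that this endpoint is reachable is crictl, the CRI debugging CLI, pointed at the same socket the kubelet will use (a minimal sketch; crictl must be installed separately):

    # Query containerd over the CRI socket; prints the runtime name and version.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock version

    # The same endpoint can be made persistent for crictl in /etc/crictl.yaml:
    #   runtime-endpoint: unix:///run/containerd/containerd.sock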

2. Internal Architecture: The cri Plugin

Crucially, containerd is not a Kubernetes-specific runtime; it is a general-purpose container runtime built around an internal plugin architecture. To satisfy the Kubernetes CRI requirements, containerd must have its CRI integration plugin enabled.

  • Configuration File: The interaction logic is defined in /etc/containerd/config.toml.
  • Enabling CRI: You must ensure that cri is not included in the disabled_plugins list within this configuration file; if cri is disabled, the kubelet cannot communicate with containerd (see the excerpt below).
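
As a minimal excerpt of /etc/containerd/config.toml (assuming containerd 1.x and config format version 2), this is what a CRI-enabled configuration looks like; note that some distribution packages ship a config.toml that lists cri under disabled_plugins, which must be removed before the kubelet can connect:

    # /etc/containerd/config.toml (excerpt)
    version = 2

    # cri must NOT appear in this list; an empty list leaves all plugins enabled.
    disabled_plugins = []

    # After editing, restart containerd and confirm the plugin loaded:
    #   systemctl restart containerd
    #   ctr plugins ls | grep cri      # expect status "ok" for io.containerd.grpc.v1 cri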

3. Resource Management: Cgroup Driver Alignment

One of the most critical interaction points between the kubelet and containerd is the management of Linux control groups (cgroups) for resource isolation (CPU/Memory). A mismatch here is a frequent cause of node instability.

  • The Requirement: Both the kubelet and containerd must use the same cgroup driver.
  • Systemd Integration: On modern Linux distributions using systemd (and cgroup v2), the recommended driver is systemd.
  • Containerd Configuration: You must explicitly configure containerd to use the systemd driver by setting SystemdCgroup = true in the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section of config.toml.
  • Kubelet Configuration: Simultaneously, the kubelet must be configured with cgroupDriver: systemd in its KubeletConfiguration file; both sides are shown in the sketch below.
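
Putting the two sides together, a minimal sketch of the matching configuration (assuming containerd 1.x config format version 2 and a kubelet that reads its KubeletConfiguration from a file such as /var/lib/kubelet/config.yaml):

    # /etc/containerd/config.toml (excerpt): runc delegates cgroup management to systemd
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

    # KubeletConfiguration (YAML): the kubelet must declare the same driver
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

With mismatched drivers, resources end up tracked in two different cgroup views, and such nodes are known to become unstable under resource pressure.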

4. Runtime Handlers and RuntimeClasses

The CRI allows the kubelet to request different "types" of containers (e.g., standard containers vs. sandboxed VMs) via RuntimeClasses.

  • Mapping: Inside containerd's config.toml, you define a named section per runtime under the cri plugin's runtimes table; each section name corresponds to a handler name (see the sketch after this list).
  • Interaction: When a Pod is scheduled with a specific runtimeClassName, the kubelet sends that class name over gRPC. containerd looks up the corresponding handler in its config to determine which low-level binary (like runc or kata-runtime) to invoke.
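
An illustrative end-to-end sketch of that mapping follows; the handler name kata and the Kata shim runtime_type are assumptions used purely as an example of a second runtime, and the same pattern applies to any handler you define:

    # /etc/containerd/config.toml (excerpt): a second handler named "kata"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
      runtime_type = "io.containerd.kata.v2"

    # RuntimeClass (YAML): "handler" must match the section name in config.toml
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata

    # Pod (YAML): the kubelet forwards this handler name to containerd over CRI
    apiVersion: v1
    kind: Pod
    metadata:
      name: sandboxed-pod
    spec:
      runtimeClassName: kata
      containers:
        - name: app
          image: nginx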

5. Networking Responsibility

While the kubelet defines the networking requirements (IPs, Ports), containerd is responsible for the actual execution of network setup via CNI (Container Network Interface).

  • containerd loads CNI plugins to configure the network namespace.
  • For example, containerd internally configures the loopback interface (lo) for Pod sandboxes using a CNI loopback plugin.
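
Where containerd finds those CNI plugins is itself part of the cri plugin configuration; a short excerpt with the upstream default paths (your distribution may use different ones):

    # /etc/containerd/config.toml (excerpt): CNI discovery paths used by the cri plugin
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"       # CNI plugin binaries (bridge, loopback, ...)
      conf_dir = "/etc/cni/net.d"    # network configuration files, loaded in lexical order

Until a valid network configuration appears in conf_dir, containerd reports NetworkReady as false over the CRI, which typically keeps the node in the NotReady state.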

In summary, the kubelet instructs containerd what to do (create a sandbox, start a container) via the CRI gRPC socket, but containerd relies on its internal cri plugin and config.toml to determine how to execute those instructions using low-level runtimes and CNI plugins.