How does the Container Runtime Interface interact with containerd specifically?
Following are the specific mechanical and configuration interactions between the Kubernetes kubelet and containerd via the Container Runtime Interface (CRI).
The interaction relies on a client-server model where the kubelet acts as the gRPC client and containerd acts as the gRPC server.
1. The Communication Channel: gRPC over Sockets
The kubelet does not interact with containerd directly through system calls or CLI commands. Instead, it connects to a specific endpoint exposed by containerd to send Protocol Buffer messages via gRPC.
- Linux Endpoint: The default communication channel is a Unix domain socket located at unix:///run/containerd/containerd.sock.
- Windows Endpoint: The default communication channel is a named pipe located at npipe://./pipe/containerd-containerd.
When the kubelet starts, it needs to know which runtime endpoint to use. If you are using containerd, you may need to explicitly configure the kubelet with --container-runtime-endpoint=unix:///run/containerd/containerd.sock to ensure it connects to the correct socket.
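As a quick sanity check (a sketch assuming the crictl debugging tool is installed on the node), you can query the same endpoint the kubelet uses:

```bash
# Ask containerd for its runtime name and version over the same
# CRI gRPC socket the kubelet connects to.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
```

If this prints containerd's runtime name and version, the gRPC channel the kubelet depends on is reachable.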
2. Internal Architecture: The cri Plugin
Crucially, containerd does not serve the CRI API in its core out of the box; it uses an internal plugin architecture. To satisfy the Kubernetes CRI requirements, containerd must have its CRI integration plugin enabled.
- Configuration File: The interaction logic is defined in /etc/containerd/config.toml.
- Enabling CRI: You must ensure that cri is not included in the disabled_plugins list within this configuration file. If cri is disabled, the kubelet cannot communicate with containerd.
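For illustration, a minimal /etc/containerd/config.toml that keeps the cri plugin enabled might look like this (config schema version 2 assumed; the key point is that "cri" is absent from disabled_plugins):

```toml
# /etc/containerd/config.toml (config schema version 2)
version = 2

# Leave this list empty, or at least omit "cri", so the CRI plugin loads.
# disabled_plugins = ["cri"] would break kubelet <-> containerd communication.
disabled_plugins = []
```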
3. Resource Management: Cgroup Driver Alignment
One of the most critical interaction points between the kubelet and containerd is the management of Linux control groups (cgroups) for resource isolation (CPU/Memory). A mismatch here is a frequent cause of node instability.
- The Requirement: Both the kubelet and containerd must use the same cgroup driver.
- Systemd Integration: On modern Linux distributions using systemd (and cgroup v2), the recommended driver is systemd.
- Containerd Configuration: You must explicitly configure containerd to use the systemd driver by setting SystemdCgroup = true in the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section of config.toml.
- Kubelet Configuration: Simultaneously, the kubelet must be configured with cgroupDriver: systemd in its KubeletConfiguration file. Both settings are sketched below.
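As a concrete sketch of that alignment (file paths are the conventional ones; adapt them to your distribution):

```toml
# /etc/containerd/config.toml -- have runc delegate cgroup management
# to systemd instead of writing to the cgroupfs hierarchy directly.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

```yaml
# KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml) -- the
# kubelet must declare the same driver as containerd.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

Both daemons read their configuration at startup, so restart containerd and the kubelet after changing either side.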
4. Runtime Handlers and RuntimeClasses
The CRI allows the kubelet to request different "types" of containers (e.g., standard containers vs. sandboxed VMs) via RuntimeClasses.
- Mapping: Inside containerd's config.toml, you define specific runtimes sections. Each section corresponds to a handler name.
- Interaction: When a Pod is scheduled with a specific runtimeClassName, the kubelet sends that class name over gRPC. containerd looks up the corresponding handler in its config to determine which low-level binary (like runc or kata-runtime) to invoke. A sketch of this mapping follows the list.
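Here is a hedged sketch of that mapping, assuming Kata Containers is installed on the node and registered under its documented shim name (io.containerd.kata.v2):

```toml
# /etc/containerd/config.toml -- each [...runtimes.<name>] table
# becomes a CRI handler the kubelet can request by name.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
```

```yaml
# RuntimeClass object -- "handler" must match the table name above.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```

A Pod opts in by setting spec.runtimeClassName: kata; the kubelet then carries the handler name in its CRI sandbox-creation request so containerd can pick the matching shim.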
5. Networking Responsibility
While the kubelet defines the networking requirements (IPs, Ports), containerd is responsible for the actual execution of network setup via CNI (Container Network Interface).
- containerd loads CNI plugins to configure the network namespace (where it finds them is set in config.toml, as sketched below).
- For example, containerd internally configures the loopback interface (lo) for Pod sandboxes using a CNI loopback plugin.
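A minimal sketch of the relevant config.toml section, using the conventional default paths:

```toml
# /etc/containerd/config.toml -- where the cri plugin looks for CNI
# plugin binaries and network definitions when setting up Pod sandboxes.
[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir  = "/opt/cni/bin"    # plugin executables (bridge, loopback, ...)
  conf_dir = "/etc/cni/net.d"  # network configuration files
```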
In summary, the kubelet instructs containerd what to do (create a sandbox, start a container) via the CRI gRPC socket, but containerd relies on its internal cri plugin and config.toml to determine how to execute those instructions using low-level runtimes and CNI plugins.