CSI Workflow

What is the exact workflow of CSI dynamic provisioning, and what are the roles of the sidecars?

Kubernetes abstracts storage provisioning using the Container Storage Interface (CSI), which allows storage vendors to develop out-of-tree plugins independently of the core Kubernetes codebase. To prevent the core Kubernetes control plane from needing to know the proprietary APIs of every storage vendor, Kubernetes employs specialized, independent sidecar containers—such as the external provisioner and the external attacher. These sidecars bridge the declarative Kubernetes API with the imperative gRPC calls expected by the vendor's CSI driver.

Dynamic provisioning automatically creates storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage manually. Here is the exact, step-by-step architectural workflow of how a dynamic raw block volume is provisioned, attached, and exposed to a Pod.

1. The Declarative Request (PVC and StorageClass)

The workflow begins when an application developer submits a PersistentVolumeClaim (PVC) to the Kubernetes API.

  • Volume Mode: To request a raw block volume rather than a standard formatted disk, the developer explicitly sets the volumeMode field in the PVC to Block.
  • Storage Class: The PVC specifies a storageClassName that maps to a StorageClass configured by the cluster administrator. The StorageClass defines which CSI provisioner should be used and passes opaque, vendor-specific parameters to it.
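The request described above can be sketched as a pair of manifests. The names and the `csi.example.com` provisioner string are illustrative placeholders, not a real driver:

```yaml
# StorageClass created by the cluster administrator.
# "csi.example.com" is a placeholder driver name; parameters are opaque
# key/value pairs passed verbatim to the vendor's CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com
parameters:
  type: ssd
---
# PVC submitted by the application developer, requesting a raw block volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # raw device, not a formatted filesystem
  storageClassName: fast-block
  resources:
    requests:
      storage: 10Gi
```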

2. The Role of the External Provisioner

The external-provisioner sidecar acts as the orchestrator for volume creation.

  • Watching and Intercepting: It continuously watches the Kubernetes API for new PVCs. When it detects a PVC whose StorageClass references its specific driver name, it begins the provisioning flow.
  • Physical Provisioning: The external-provisioner reads the parameters from the StorageClass and issues a CreateVolume gRPC call to the storage vendor's CSI driver, which physically provisions the block storage on the backend infrastructure.
  • Object Creation: Once the physical storage is successfully created, the external-provisioner automatically generates a PersistentVolume (PV) object in the Kubernetes API to represent the new asset.
  • Binding: A control loop in the Kubernetes control plane then binds the newly created PV to the user's PVC. The internal binding matrix strictly enforces that a PVC requesting Block mode will only bind to a PV that also specifies Block mode.
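The PV generated in the steps above has roughly the following shape; the name and volumeHandle are placeholders for the backend ID returned by the driver's CreateVolume response:

```yaml
# Illustrative PV created by the external-provisioner. volumeMode must
# match the PVC's volumeMode (Block) for the binding to succeed.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: fast-block
  csi:
    driver: csi.example.com        # placeholder driver name
    volumeHandle: vol-0123456789   # backend volume ID from CreateVolume
```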

(Architectural Note: If the StorageClass is configured with volumeBindingMode: WaitForFirstConsumer, the external-provisioner delays physical provisioning and binding until the Kubernetes scheduler has chosen a feasible Node for the Pod. This ensures topology-constrained storage is created in the correct Availability Zone.)
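A topology-aware variant of the StorageClass, sketched with an illustrative zone constraint, looks like this:

```yaml
# Provisioning is deferred until the scheduler places the Pod, so the
# volume is created in the same zone as the chosen Node.
# Names and the zone value are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block-topology
provisioner: csi.example.com       # placeholder driver name
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
```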

3. The Role of the External Attacher

Once the Pod is scheduled to a Node and the volume exists, the storage must be physically attached to the host machine.

  • The Intent to Attach: The core Kubernetes attach/detach controller consults the CSIDriver object to determine if this specific driver requires an explicit attach operation (via the attachRequired boolean). If required, the controller creates a VolumeAttachment object in the Kubernetes API. This object captures the strict intent to attach the specified volume to the designated Node.
  • Execution: The external-attacher sidecar actively watches for new VolumeAttachment objects. Upon detecting one, it issues a ControllerPublishVolume gRPC call to the CSI driver, instructing the backend storage system to attach the volume to the designated Node.
  • Status Update: Once the attach operation completes successfully on the backend, the external-attacher updates the status.attached field of the VolumeAttachment object to true.
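The object mediating this exchange has roughly the following shape; the names are placeholders, and status.attached is shown as the external-attacher leaves it after a successful attach:

```yaml
# Illustrative VolumeAttachment created by the attach/detach controller.
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  name: csi-attachment-example
spec:
  attacher: csi.example.com        # placeholder driver name
  nodeName: node-1                 # Node chosen by the scheduler
  source:
    persistentVolumeName: pvc-0a1b2c3d
status:
  attached: true                   # set by the external-attacher on success
```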

4. Node-Level Presentation (The Kubelet)

The Kubernetes attach/detach controller waits until it sees that the VolumeAttachment status has been marked as attached by the external-attacher before it proceeds.

  • Once confirmed, the kubelet on the target Node takes over.
  • Because the volume was requested as a raw block device (volumeMode: Block), the kubelet intentionally skips formatting the disk with a filesystem like ext4 or XFS.
  • The volume is presented directly into the Pod as a raw block device, giving the application low-latency, direct access to the storage with no filesystem overhead.
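On the consuming side, a Pod requests the raw device with volumeDevices (rather than volumeMounts, which is used for filesystem volumes). A minimal sketch, with illustrative names and image:

```yaml
# Pod consuming the raw block volume: volumeDevices exposes the device
# node at devicePath inside the container instead of mounting a filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-consumer
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeDevices:
        - name: data
          devicePath: /dev/xvda                # device path in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```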

Ultimately, the application running inside the container must contain its own internal logic to manage and write to the raw block device, as there is no filesystem layer between the Pod and the storage.

Based on Kubernetes v1.35 (Timbernetes).