
Volume Access Modes

How exactly do I share data between multiple running Pods?

To share data between multiple running Pods in Kubernetes, you use a PersistentVolume (PV) together with a PersistentVolumeClaim (PVC) configured with the appropriate Access Mode. The Access Mode defines how the underlying storage resource can be mounted to the nodes in your cluster.

However, Access Modes are not software barriers enforced by Kubernetes; they reflect the physical and architectural capabilities of the underlying storage provider you are using.

Here is the architectural breakdown of how ReadWriteOnce and ReadWriteMany differ, and why your choice of storage infrastructure dictates which one you can use.

ReadWriteOnce (RWO)

The ReadWriteOnce access mode means the volume can be mounted as read-write by a single node.

A common misconception is that "Once" means only a single Pod can access it. In reality, ReadWriteOnce restricts access at the Node level, meaning multiple Pods can concurrently read from and write to the same volume, provided that all of those Pods are scheduled on the exact same worker node. If the Kubernetes scheduler attempts to place a new Pod on a different node that requires this same volume, the volume attachment will fail because the first node holds the exclusive read-write lock.

(Note: If you require strict single-writer access where only one Pod in the entire cluster can access the storage at a time, Kubernetes provides a separate ReadWriteOncePod access mode).
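As a minimal sketch, a PVC requesting single-node access might look like the following (the claim name and size are placeholders, not values from this article):

```yaml
# Hypothetical PVC: every Pod mounting this claim must be scheduled on
# the same node, because the volume attaches read-write to one node only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwo-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce          # single-node read-write
    # - ReadWriteOncePod     # alternative: strict single-Pod read-write
  resources:
    requests:
      storage: 10Gi          # placeholder size
```

Swapping the access mode for the commented-out `ReadWriteOncePod` line tightens the restriction from one node to one Pod across the whole cluster.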

ReadWriteMany (RWX)

The ReadWriteMany access mode means the volume can be mounted as read-write by many nodes simultaneously.

This is the required mode if you are running a highly available, replicated application (like a web server Deployment scaled to 5 replicas) where Pods are spread across different physical or virtual machines but all need concurrent read-write access to the exact same dataset.
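The replicated-web-server scenario above can be sketched as an RWX claim shared by every replica of a Deployment (all names, the image, and the mount path are illustrative placeholders):

```yaml
# Hypothetical RWX claim shared by all replicas of a Deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # placeholder name
spec:
  accessModes:
    - ReadWriteMany          # multi-node read-write
  resources:
    requests:
      storage: 50Gi          # placeholder size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5                # replicas may land on different nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx       # example image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data   # every replica mounts the same volume
```

If the backing storage only supports ReadWriteOnce, replicas scheduled onto other nodes would fail to attach the volume; RWX is what makes this layout work.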

Why Cloud Block Storage (like AWS EBS) cannot support ReadWriteMany

Standard cloud block storage, such as Amazon Elastic Block Store (EBS), Azure Disk, or Google Persistent Disk, operates at the lowest level of the storage stack. When Kubernetes attaches an AWS EBS volume, it mounts it directly to a single kubelet's host machine as a raw block device.

Once attached, the host operating system typically formats this block device with a standard filesystem like ext4 or xfs. These local filesystems are fundamentally unaware of the network or of other operating systems. If AWS were to allow you to attach that same EBS block device to two different Kubernetes nodes simultaneously, both Linux kernels would attempt to manage the filesystem's metadata (like allocation tables and inodes) independently. This would quickly lead to severe data corruption. Therefore, by architectural design, an EBS disk can only ever be mounted read-write by a single node.

Why Distributed File Systems (like NFS or EFS) can support ReadWriteMany

To achieve ReadWriteMany, you must decouple filesystem management from the individual worker nodes. This is where a network filesystem such as NFS, or a managed equivalent like Amazon Elastic File System (EFS), comes in.

An NFS volume allows an existing network-attached share to be mounted into a Pod. Because NFS is fundamentally designed for distributed access, the centralized NFS server (or EFS service) acts as the single source of truth. It handles all the complex file-locking, metadata updates, and concurrency control on the backend.

Because the worker nodes are simply acting as network clients rather than managing the raw disk blocks, NFS can be safely mounted by multiple writers simultaneously across any number of nodes. This architectural capability is why NFS and Azure File volumes natively support the ReadWriteMany access mode in Kubernetes.
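As an illustrative sketch, a statically provisioned NFS-backed PersistentVolume can declare ReadWriteMany directly (the server address and export path below are placeholders):

```yaml
# Hypothetical statically provisioned NFS-backed PersistentVolume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany          # safe: the NFS server arbitrates all writers
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com  # placeholder NFS server address
    path: /exports/shared    # placeholder export path
```

Each node that mounts this volume is only an NFS client; the locking and metadata coordination that would corrupt a shared block device happens once, on the server side.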

Based on Kubernetes v1.35 (Timbernetes).