ConfigMap Hot-Reloading
How can I update a ConfigMap mounted as a volume?
Updating a ConfigMap that is mounted as a volume in a Pod is a natively supported operation in Kubernetes. When you modify the ConfigMap, the Kubernetes control plane and the node's kubelet collaborate to eventually project the new data into the running container's filesystem.
However, as an architect or operator, you must understand the asynchronous, decoupled nature of this workflow to ensure your applications behave predictably.
Here is the comprehensive breakdown of how to perform the update, how the architecture processes it, and the real-world application design considerations.
1. Modifying the ConfigMap
To initiate the update, you modify the ConfigMap in the Kubernetes API using your preferred declarative or imperative method:
- Declarative: Update your local YAML manifest and apply it using `kubectl apply -f configmap.yaml`.
- Imperative: Directly edit the live object using `kubectl edit configmap <name>`.
The moment this command succeeds, the updated "desired state" is persisted in etcd.
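For illustration, here is a minimal ConfigMap manifest you might edit and re-apply; the name `app-config` and its keys are hypothetical placeholders:

```yaml
# configmap.yaml -- hypothetical example
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log-level: "debug"   # change a value here, then: kubectl apply -f configmap.yaml
  app.properties: |
    feature.newCheckout=true
```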
2. The Architectural Workflow (The Kubelet Sync)
The change does not instantly appear inside your Pod's filesystem. Instead, it relies on the kubelet running on the node where your Pod is scheduled.
- Caching and Synchronization: The `kubelet` does not constantly query the API server for every file read. It uses a local cache for ConfigMaps (typically populated via a watch mechanism or TTL-based polling). On its periodic sync loop (which defaults to 1 minute), the `kubelet` evaluates whether the mounted ConfigMap data is fresh.
- Propagation Delay: Because of this decoupled architecture, there is an inherent delay. The total time from when you update the ConfigMap in the API to when the file is updated in the container can be as long as the `kubelet` sync period plus the cache propagation delay.
- Atomic Updates: To prevent applications from reading partially written configuration files, the `kubelet` updates the volume atomically. It writes the new data to a timestamped temporary directory, then uses a symbolic link (named `..data`) and the Linux `rename(2)` system call to atomically repoint the link at the new directory.
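The workflow above assumes the ConfigMap is mounted as a whole directory. A sketch of such a Pod spec (all names and the image are placeholders):

```yaml
# Hypothetical Pod fragment: the ConfigMap is projected as a directory,
# so the kubelet can atomically repoint the ..data symlink underneath it.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app               # keys appear as /etc/app/<key>
  volumes:
    - name: config
      configMap:
        name: app-config                    # hypothetical ConfigMap name
```

Inside the container, each file under /etc/app is a symlink that resolves through `..data`, so a reader always observes either the complete old version or the complete new version of a file, never a mix.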
3. The Application Responsibility (Hot-Reloading)
This is where many operational issues occur. The kubelet updates the file on disk, but it does not restart the container or send a signal to your application.
How your application handles this depends entirely on its internal architecture:
- Hot-Reloading Applications: If your application is written to watch for filesystem changes (using mechanisms like `inotify`) or periodically re-reads its configuration file, it will seamlessly detect the new data and adjust its behavior without disruption.
- Static Applications: If your application reads its configuration into memory only once during startup, it will remain entirely unaware of the updated ConfigMap. To force this type of application to pick up the new configuration, you must restart the Pods, typically by triggering a rolling update on the managing Deployment (e.g., `kubectl rollout restart deployment <name>`).
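Under the hood, `kubectl rollout restart` patches the Deployment's Pod template with a timestamp annotation; the Deployment controller treats this as a template change and performs a normal rolling update. A sketch of the effect (the timestamp value is illustrative):

```yaml
# Effect of `kubectl rollout restart deployment <name>` on the Deployment:
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-01-01T00:00:00Z"  # illustrative value
```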
Architectural Exceptions and Best Practices
When designing your configuration management strategy, keep the following constraints and best practices in mind:
The subPath Limitation
If you mounted specific keys from the ConfigMap into the container using the volumeMounts.subPath directive, the container will not receive automatic updates. The kubelet cannot safely perform atomic symlink swaps on single files mounted via subPath. If you use subPath, you must replace the Pod to apply configuration changes.
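For contrast, here is a sketch of a single-key `subPath` mount (names hypothetical) that will not receive automatic updates:

```yaml
# Hypothetical container/volume fragments. The file is copied once at
# Pod startup; later ConfigMap edits are NOT projected into the container.
volumeMounts:
  - name: config
    mountPath: /etc/app/app.properties
    subPath: app.properties    # single-file mount; bypasses the ..data symlink swap
volumes:
  - name: config
    configMap:
      name: app-config
```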
Immutable ConfigMaps for Scale and Safety
In large-scale production environments, actively watching ConfigMaps for changes puts a heavy load on the API server. A highly recommended best practice is to use Immutable ConfigMaps by setting immutable: true in the ConfigMap spec.
With this pattern, you never update an existing ConfigMap. Instead, you:
- Create a brand-new ConfigMap (e.g., `app-config-v2`).
- Update your Deployment's Pod template to reference the new ConfigMap name.
- Let the Deployment automatically perform a rolling update, gracefully terminating old Pods and starting new ones with the fresh configuration.
This "configuration-as-code" pattern prevents accidental outages caused by live edits and forces a clean, reproducible application restart.
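A sketch of the versioned, immutable pattern (names hypothetical): the ConfigMap carries `immutable: true`, and the Deployment's Pod template pins the versioned name, so shipping new configuration means creating `app-config-v3` and updating the reference.

```yaml
# Hypothetical immutable ConfigMap. To change configuration, create
# app-config-v3 and update the Deployment reference; never edit in place.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true        # API server rejects edits to data; kubelet stops watching it
data:
  log-level: "info"
---
# Deployment fragment pinning the versioned name:
spec:
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: app-config-v2   # bump this to roll out new config
```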