Scenario: PVC Stuck Terminating
Deleting a PVC does not mean it is gone. If a Pod is still using it, a finalizer will block the deletion indefinitely.
Why is my PVC stuck in "Terminating" state even after deleting the Pod and Volume?
Based on the Kubernetes documentation, your PersistentVolumeClaim (PVC) is likely stuck in the Terminating state because of the Storage Object in Use Protection feature.
This feature adds a specific finalizer (`kubernetes.io/pvc-protection`) to the PVC to ensure data is not lost while a Pod is still actively using the volume. Even if you have issued a delete command for the Pod, the PVC will remain `Terminating` until the Pod object is completely removed from the API server.
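To see the protection finalizer directly, you can print the PVC's `metadata.finalizers` field. A minimal check; `<your-pvc-name>` and `<namespace>` are placeholders for your own values:

```bash
# kubernetes.io/pvc-protection in this list means Storage Object in Use
# Protection is holding the deletion open
kubectl get pvc <your-pvc-name> -n <namespace> -o jsonpath='{.metadata.finalizers}'
```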
Here is the breakdown of why this happens and how to resolve it:
1. The "Ghost" Pod (Pod is Stuck Terminating)
You mentioned you "deleted the Pod," but in Kubernetes, issuing a delete command does not mean the object is instantly gone.
- The Mechanism: When you delete a Pod, it enters the `Terminating` phase. It remains in this phase until the `kubelet` on the node confirms that all containers have stopped and the volume has been unmounted.
- The Problem: If the node is unresponsive (e.g., network partition, hardware failure) or if a `preStop` hook is hanging, the `kubelet` cannot report success. The Pod object remains in the API server effectively forever (or until the node is evicted).
- The Result: Because the Pod object still exists (even in the `Terminating` state), the PVC controller sees the volume as "in use" and refuses to remove the `kubernetes.io/pvc-protection` finalizer, blocking the PVC deletion. You can confirm this directly, as shown below.
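A quick way to confirm a "ghost" Pod is to check whether it has a `deletionTimestamp` set and which node it is scheduled on. A sketch; `<pod-name>` and `<namespace>` are placeholders:

```bash
# A non-empty deletionTimestamp means the delete was accepted but has
# not yet been confirmed by the kubelet
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.deletionTimestamp}'

# Find the node the Pod runs on, then check whether that node is healthy
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.nodeName}'
kubectl get nodes
```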
2. Hidden Usage (Multiple Consumers)
It is possible that another Pod is referencing the same PVC.
- Example: If you deleted a Pod managed by a `Deployment` or `StatefulSet`, the controller might have immediately created a replacement Pod that attached to the same PVC before the old one fully terminated.
- Check: Run `kubectl describe pvc <pvc-name>` and look at the `Mounted By` field (if available in your version/output), or check `kubectl get pods --all-namespaces -o wide` to ensure no other Pods are bound to it. A more targeted query is sketched below.
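If `Mounted By` is not shown in your output, you can enumerate every Pod that references the claim by name. This sketch assumes `jq` is installed and uses the hypothetical PVC name `my-pvc`:

```bash
# Print namespace/name for every Pod whose volumes reference the PVC "my-pvc"
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(any(.spec.volumes[]?; .persistentVolumeClaim.claimName == "my-pvc"))
      | "\(.metadata.namespace)/\(.metadata.name)"'
```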
Troubleshooting Steps
Step 1: Confirm the Finalizer
Run the following command to confirm the pvc-protection finalizer is present:
```bash
kubectl describe pvc <your-pvc-name>
```

Look for `Finalizers: [kubernetes.io/pvc-protection]` in the output.
Step 2: Find the Lingering Pod
Check for Pods that are still using the claim. Crucially, look for Pods in the `Terminating` or `Unknown` state:
```bash
kubectl get pods --all-namespaces | grep -E 'Terminating|Unknown'
```

Step 3: Force Delete the Pod (If Node is Dead)
If the Pod is stuck terminating because the node is down (Node status `NotReady` or `Unknown`), you must force delete the Pod to remove the object from the API server. This releases the lock on the PVC.
```bash
# Only run this if you are sure the node is down or the process is dead
kubectl delete pod <pod-name> --grace-period=0 --force
```

Note: Force deletion immediately removes the API object without waiting for the kubelet.
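Before reaching for `--force`, it is worth confirming that the node really is unreachable. A minimal check, assuming `<node-name>` is taken from the stuck Pod's `spec.nodeName`:

```bash
# "Unknown" means the control plane has lost contact with the kubelet,
# which is the case where force deletion is justified
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```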
Step 4: Manual Finalizer Removal (The "Break Glass" Option)
If you are absolutely certain no Pods are using the volume and the PVC is still stuck, you can manually patch the PVC to remove the finalizer.
Warning: This bypasses safety checks and can lead to data corruption if the volume is actually in use.
```bash
kubectl patch pvc <your-pvc-name> -p '{"metadata":{"finalizers":null}}'
```
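Once the finalizer is removed, the pending deletion should complete almost immediately. A quick verification (the PVC name is a placeholder):

```bash
# Should report NotFound once the deletion has gone through
kubectl get pvc <your-pvc-name>
```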