Have you ever tried to delete a namespace, only to find it stuck in the Terminating state?
Recently, I faced this situation, and here is how I fixed it -
1. Dumped the contents of the namespace into a temporary file in JSON format -
kubectl get namespace <terminating-namespace> -o json > namespace.json
2. Edited the JSON file and removed the finalizers array block from the spec section, which initially looked like this -
"spec": {
"finalizers": [
"kubernetes"
]
}
3. After removing the finalizers block, the spec section looked like this -
"spec": {
}
4. Then, I applied the changes with the command -
kubectl replace --raw "/api/v1/namespaces/<terminating-namespace>/finalize" -f ./namespace.json
After that, the namespace was no longer stuck in the Terminating state; it had been deleted.
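If you prefer not to edit the file by hand, steps 1-4 can be combined into a single pipeline. This is just a sketch of the same approach, assuming jq is installed and <terminating-namespace> is replaced with your namespace name -
kubectl get namespace <terminating-namespace> -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/<terminating-namespace>/finalize" -f -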
But why was it stuck in the Terminating state, and how did removing the finalizer from the spec section work?
A finalizer is a special metadata key that tells Kubernetes to wait until a specific condition is met before it fully deletes a resource.
So when you run a command like kubectl delete namespace NAMESPACE_NAME, Kubernetes checks the finalizers in the metadata.finalizers field of the resources involved. If a resource protected by a finalizer cannot be fully deleted for any reason (for example, the controller responsible for removing the finalizer is no longer running), then the namespace cannot be deleted either.
This leaves the namespace in the Terminating state, waiting for a removal that never occurs.
When an object has been terminating for an excessive time, check its finalizers by inspecting the metadata.finalizers field in its YAML.
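For a namespace specifically, the finalizer sits under spec.finalizers (as in step 2 above), while most other objects carry finalizers under metadata.finalizers. As a rough check, assuming <terminating-namespace> is your stuck namespace, commands like these show the finalizers and the status conditions that explain what is still blocking deletion -
kubectl get namespace <terminating-namespace> -o jsonpath='{.spec.finalizers}'
kubectl get namespace <terminating-namespace> -o jsonpath='{.status.conditions}'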
In the image above you can see that my-namespace was stuck in the Terminating state. I followed steps 1-4 and resolved the issue. However, there are some other ways to resolve it as well.
Another option is to manually delete the resources that are still left in the namespace. First, list them -
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n <namespace>
A stuck namespace can also be caused by an aggregated API service that is no longer available. List the API services and look for any whose AVAILABLE column is False (APIService objects are cluster-scoped, so the -n flag is not needed) -
kubectl get apiservice | grep False
Then delete the unavailable one.
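As a hypothetical example, if the listing showed an unavailable APIService named v1beta1.metrics.k8s.io (the name will differ in your cluster), you would remove it with -
kubectl delete apiservice v1beta1.metrics.k8s.io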
Another edge case is when a custom resource (created from a CRD) is stuck and is causing the namespace to be stuck.
You can get all resources for a namespace by running:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
This also lists resources from custom CRDs, which are not shown by the standard -
kubectl get all -n <namespace>
Generally, it is these custom resources (and the CRDs behind them) that prevent the namespace from being deleted.
List the CRDs with kubectl get crd (CRDs are cluster-scoped, so the -n flag is not needed).
Then delete the offending one with kubectl delete crd <name>.
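Keep in mind that deleting a CRD also deletes every custom resource of that type. If only a single custom resource is stuck on its finalizer, a less drastic option (a sketch, with <kind>, <name> and <namespace> as placeholders) is to clear its finalizers with a merge patch -
kubectl patch <kind> <name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'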
Any of these approaches can be used to delete a namespace stuck in the Terminating state.