Kubernetes object stuck in Terminating state
Kubernetes objects can get stuck as Terminating when undeployed
While undeploying a service or application from your Kubernetes cluster, an object from that service or application may get stuck in a Terminating state. It is possible to unstick such a Terminating object by tinkering a bit with its effective configuration as deployed in the cluster. This chapter uses an older cert-manager installation to illustrate the issue and a possible solution.
Note
This solution has not been tested with the current versions used in this guide.
Since this issue was detected while working with an older version of K3s (v1.22.3+k3s1), it may no longer happen in modern K3s or Kubernetes clusters.
Scenario
Suppose you have deployed cert-manager v1.5.3 in your cluster by applying, with kubectl, its default official cert-manager.yaml manifest.
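That apply command, using the same release manifest URL that appears in the delete step below, would be:

```bash
# Deploy cert-manager v1.5.3 from the official release manifest.
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
```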
To undeploy cert-manager, you would just delete the same YAML manifest used in the deployment:

```bash
$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
```

This command will output a bunch of lines but will not return control to the shell. That is because there is one object that cannot be deleted, so the command cannot finish until that object is properly terminated.
Open another shell and use kubectl to locate the problematic object. In this example with cert-manager, it happened to be the cert-manager namespace the deployment created for itself:

```bash
$ kubectl get namespaces
NAME              STATUS        AGE
default           Active        43h
kube-system       Active        43h
kube-public       Active        43h
kube-node-lease   Active        43h
metallb-system    Active        22h
cert-manager      Terminating   144m
```

No matter what you do, not even rebooting the entire cluster will get rid of that cert-manager namespace.
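If you are not sure which namespace is the stuck one, you can also filter on its phase directly. A minimal sketch, assuming kubectl is already configured against your cluster (status.phase is one of the field selectors supported for namespaces):

```bash
# List only the namespaces currently stuck in the Terminating phase.
kubectl get namespaces --field-selector status.phase=Terminating
```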
Solution
The procedure explained next is taken from an article by Craig Newton, adapted here for this cert-manager example:
1. Dump the cert-manager namespace's declaration into a JSON file:

```bash
$ kubectl get namespaces cert-manager -o json > cert-manager_namespace.json
```

2. Open the cert-manager_namespace.json file with a text editor. It should look like the following:

```json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cert-manager\"}}\n"
        },
        "creationTimestamp": "2021-09-01T12:08:32Z",
        "labels": {
            "kubernetes.io/metadata.name": "cert-manager"
        },
        "name": "cert-manager",
        "resourceVersion": "33330",
        "uid": "d77020c8-433e-4abd-a95d-9568dcada8eb"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}
```

3. Remove only the "kubernetes" string within spec.finalizers; the array declared in the finalizers parameter must remain, but empty:

```json
{
    ...
    "spec": {
        "finalizers": [
        ]
    },
    ...
}
```

4. Finally, execute a replace command with kubectl:

```bash
$ kubectl replace --raw "/api/v1/namespaces/cert-manager/finalize" -f cert-manager_namespace.json
```

Notice the Kubernetes /api/v1 path, specified in this case for the cert-manager namespace. If the resource or object to modify were anything else, you would have to specify the correct API path for it.
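The manual edit in the steps above can also be scripted end to end. A hedged sketch, assuming jq is installed and that the stuck object is still the cert-manager namespace:

```bash
# Dump the namespace, empty its spec.finalizers array with jq,
# and feed the result straight into the namespace's finalize subresource.
kubectl get namespace cert-manager -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/cert-manager/finalize" -f -
```

The `-f -` argument makes kubectl read the modified JSON from standard input, so no intermediate file is needed.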
This procedure will effectively unstick the namespace and let the pending kubectl delete process finish, returning control to the prompt after a few seconds.
Note
This procedure is valid for all kinds of Kubernetes objects and resources, not just namespaces.
Just be aware that you will need to find the correct API path for the Kubernetes (or non-Kubernetes) object or resource you are trying to fix with this solution.
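For objects other than namespaces, a commonly used alternative (not part of the original article) is to clear the finalizers with a merge patch instead of calling a raw API path. The resource and names below are hypothetical:

```bash
# Hypothetical example: clear the finalizers of a stuck Deployment.
# Unlike namespaces, most objects keep their finalizers under
# metadata.finalizers rather than spec.finalizers.
kubectl patch deployment my-stuck-deployment -n my-namespace \
  --type merge -p '{"metadata":{"finalizers":[]}}'
```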