Troubleshooting Kubernetes Deployment Issues


Troubleshooting CrashLoopBackOff Issue using sample Deployment
Kubernetes is a powerful container orchestration tool that automates the deployment, scaling, and management of containerized applications. However, it is common to run into issues while deploying applications. Here we will discuss one of the most common Kubernetes issues: the CrashLoopBackOff state. We will first deploy a faulty container, then learn how to find the issue via the container logs, and finally fix the issue, using a single sample deployment throughout.
Understanding CrashLoopBackOff state
The CrashLoopBackOff state is a sign that a container in our Pod keeps crashing shortly after starting, so Kubernetes restarts it repeatedly with an increasing back-off delay. This can happen for various reasons, such as application errors, resource limits being exceeded, or a misconfigured application. Depending on the error we find in the logs, we can troubleshoot the issue accordingly.
Here we will deploy a faulty Pod using the below manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: faulty-image
spec:
  replicas: 1
  selector:
    matchLabels:
      app: faulty-image
  template:
    metadata:
      labels:
        app: faulty-image
    spec:
      containers:
      - name: faulty-nginx-container
        image: nginx:latest
        command: ["nginx"]
        args: ["-g", "daemon off;", "-c", "/etc/nginx/nonexistent.conf"]
        ports:
        - containerPort: 80
Here we have used the nginx image, which is not faulty by itself. To make it fail, we pass args telling nginx to load a nonexistent configuration file, /etc/nginx/nonexistent.conf, which it won't be able to find, so the container will exit with an error. Let's apply this object and view the logs for the error.
kubectl apply -f <deployment-file-name>
Now let's check the Pod status with the below command.
kubectl get pods
We will get output similar to the below.
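Assuming the Deployment is named faulty-image, the output will look roughly like this (the generated pod-name suffix, restart count, and age will differ on your cluster):

```
NAME                            READY   STATUS             RESTARTS      AGE
faulty-image-6d4cf56db6-x7k2p   0/1     CrashLoopBackOff   3 (20s ago)   80s
```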
We can see in the above output that the Status is CrashLoopBackOff. Now let's view the logs generated by the container with the below command.
kubectl logs <pod-name> -c <container-name>
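For this particular failure, the log output will contain an nginx emergency error along these lines (the exact wording can vary slightly between nginx versions):

```
nginx: [emerg] open() "/etc/nginx/nonexistent.conf" failed (2: No such file or directory)
```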
In the above output we can clearly see that nginx is failing with a "No such file or directory" error, which is the reason the container keeps crashing.
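If a container crashes before writing any logs, two other standard kubectl commands help. `kubectl describe pod` shows the Events section with pull errors, restart reasons, and back-off messages, and the `--previous` flag fetches logs from the last terminated instance of the container:

```
# Show the pod's events and container state (reason, exit code)
kubectl describe pod <pod-name>

# Fetch logs from the previous (crashed) instance of the container
kubectl logs <pod-name> -c <container-name> --previous
```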
Troubleshooting the issue
Now, to troubleshoot this issue, we can either create the missing file inside the container or remove this configuration from the Deployment. Here I am removing the configuration from the Deployment, as in the below manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fixed-image
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fixed-image
  template:
    metadata:
      labels:
        app: fixed-image
    spec:
      containers:
      - name: fixed-nginx-container
        image: nginx:latest
        command: ["nginx"]
        args: ["-g", "daemon off;"]
        ports:
        - containerPort: 80
Here I have removed that file configuration and also changed the name from faulty-image to fixed-image. Now let's apply the manifest again with the below command.
kubectl apply -f <deployment-file-name>
Now let's check the status again with the below command.
kubectl get pods
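Assuming the rollout succeeded, the output will look roughly like this (again, the pod-name suffix and age will differ):

```
NAME                           READY   STATUS    RESTARTS   AGE
fixed-image-7c9f8b5d4b-q2wzm   1/1     Running   0          30s
```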
Now we can see in the above output that fixed-image is up and running. Let's also verify this from the container logs.
kubectl logs <pod-name> -c <container-name>
We can see in the above output that the container is now producing normal logs and running successfully.
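As an extra check, we can confirm that nginx actually serves traffic by port-forwarding to the Deployment and curling it locally (local port 8080 is an arbitrary choice here):

```
# Forward local port 8080 to the container's port 80
kubectl port-forward deployment/fixed-image 8080:80

# In another terminal: this should return the default nginx welcome page
curl http://localhost:8080
```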
This is how we can troubleshoot a CrashLoopBackOff issue. We will discuss other troubleshooting scenarios in an upcoming module.
Written by

Gaurav Kumar
I have been working as a full-time DevOps Engineer at Tata Consultancy Services for the past 2.7 years. I have very good experience with containerization tools such as Docker, Kubernetes, and OpenShift, and good experience using Ansible, Terraform, and other tools.