Kubernetes is an open source platform for deploying and managing containerized applications across many servers. The smallest deployable unit of computing in Kubernetes is a pod, which contains one or more containers. A container is a packaged software application that includes all the dependencies and binaries required to run the application.
When you deploy a pod in Kubernetes, Kubernetes creates the container from the image you specify in the pod object. However, there are situations where Kubernetes fails to create the container due to errors encountered while creating it. These errors include `CreateContainerError` and `CreateContainerConfigError`.
These errors can be tricky, so it’s important to understand their underlying causes to make it easier to avoid them. It’s also important to learn how to fix them, so you can resolve them when they occur. In this article, you’re going to learn more about these errors—how to identify them, what causes them, and how to fix them.
What are CreateContainerConfigError and CreateContainerError?
CreateContainerConfigError and CreateContainerError are errors that occur while a Kubernetes container is transitioning from the pending state to the running state. Before a container enters the running state, Kubernetes validates the deployment configuration to ensure that it is configured properly.
During this validation step, if a referenced configuration object such as a ConfigMap is missing, the CreateContainerConfigError comes up. CreateContainerError, on the other hand, occurs when your pod references a PersistentVolume that isn’t properly configured, or when Kubernetes tries to create a pod from the pod manifest file and the container image referenced in it has an empty or invalid ENTRYPOINT.
In the next sections of this article, you will be learning more about the causes of these errors, as well as how to resolve CreateContainerConfigError and CreateContainerError.
Identifying the CreateContainerConfigError
Let’s deploy a simple MySQL pod that relies on the following Kubernetes objects:
- A Secret to store your username and password.
- A ConfigMap to store non-sensitive configuration data.
Create a file named `mysql_pod.yaml` and paste the following content:
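The exact manifest isn’t reproduced here, but a minimal sketch consistent with the rest of this walkthrough would look like the following. The image `mysql:5.6` and Secret name `mysql-secret` appear in the events output later in this article; the ConfigMap name `mysql-config` and the key names are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.6
      env:
        # References a Secret that doesn't exist yet
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        # References a ConfigMap that doesn't exist yet
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-config
              key: database
```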
In your terminal, run the code below:
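Assuming the filename `mysql_pod.yaml` used above:

```
kubectl create -f mysql_pod.yaml
```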
You will get a response confirming that the object was created: `pod/mysql created`.
Get the pod you just created:
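For example:

```
kubectl get pod mysql
```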
You will receive a response with the status `CreateContainerConfigError`.
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| mysql | 0/1 | CreateContainerConfigError | 0 | 7s |
Causes of CreateContainerConfigError and solutions
The main cause of `CreateContainerConfigError` is a Secret or ConfigMap that’s referenced in your pod but doesn’t exist.
Continuing with the example project, run the following command:
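For example:

```
kubectl describe pod mysql
```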
You’ll receive a response describing the MySQL pod in detail. Scroll to the end of the response, where you’ll see an `Events` section. This is a list of events that have occurred in the process of creating the pod. You’ll see an event with the type `Warning`, and the reason will be `Failed`. The message associated with that event indicates the cause of the `CreateContainerConfigError`. In this case, the message states that the secret wasn’t found, as seen below.
...
...
Events:
| TYPE | REASON | AGE | FROM | MESSAGE |
|---|---|---|---|---|
| ---- | ------ | --- | ---- | ------- |
| Normal | Scheduled | 46s | default-scheduler | Successfully assigned default/mysql to minikube |
| Normal | Pulled | 14s (x5 over 44s) | kubelet | Container image "mysql:5.6" already present on machine |
| Warning | Failed | 14s (x5 over 44s) | kubelet | Error: secret "mysql-secret" not found |
This indicates that Kubernetes wasn’t able to locate the secret named “mysql-secret”. To verify that, run the following.
You will receive the following response:
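Fetching the Secret directly shows that it doesn’t exist; `kubectl` returns a NotFound error similar to the comment below:

```
kubectl get secret mysql-secret
# Error from server (NotFound): secrets "mysql-secret" not found
```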
As you can see from the response, the cause of the error is that you don’t have a Secret object named `mysql-secret` present in your cluster. When you created the MySQL pod manifest file, you referenced a Secret and a ConfigMap in the env attribute without actually creating them.
Let’s create the secret object imperatively.
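For example (the key and value here are placeholders; use whatever key your pod’s `secretKeyRef` expects):

```
kubectl create secret generic mysql-secret --from-literal=password=MySecurePassword
```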
Then delete your MySQL pod and recreate it.
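Assuming the same manifest filename as before:

```
kubectl delete pod mysql
kubectl create -f mysql_pod.yaml
```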
After recreating the MySQL pod, run the command below to view the pod in detail.
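As before:

```
kubectl describe pod mysql
```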
This time around, you’ll see a failure event similar to the previous one, except the message now reports a missing ConfigMap instead of a missing Secret.
Using the same approach you used to fix the missing Secret, create a ConfigMap for your pod.
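For example (the ConfigMap name, key, and value here are placeholders; match whatever your pod’s `configMapKeyRef` references):

```
kubectl create configmap mysql-config --from-literal=database=mydb
```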
Delete and recreate the MySQL pod.
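For example:

```
kubectl delete pod mysql
kubectl create -f mysql_pod.yaml
```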
View the newly created mysql pod.
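For example:

```
kubectl get pod mysql
```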
In the response, you will see that the status is no longer `CreateContainerConfigError`, but `Running`.
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| mysql | 1/1 | Running | 0 | 66s |
Now that you know the causes and fixes for `CreateContainerConfigError`, you’ll look at `CreateContainerError`.
Identifying the CreateContainerError
To explain how to identify a `CreateContainerError`, let’s create a simple Node.js app pod. Create a file named `nodeapp.yaml` in your working directory and paste the following content:
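The original manifest isn’t shown here; a minimal sketch matching the pod and image names used below would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeapp
spec:
  containers:
    - name: nodeapp
      image: tolustar/bad-node-app
```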
The image used to create the pod will be explained later on. Create the pod object by running the following:
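Assuming the filename `nodeapp.yaml` from above:

```
kubectl create -f nodeapp.yaml
```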
The nodeapp is created. Run the following command to view the pod:
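For example:

```
kubectl get pod nodeapp -o wide
```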
It will return this response:
| NAME | READY | STATUS | RESTARTS | AGE | IP | NODE | NOMINATED NODE | READINESS GATES |
|---|---|---|---|---|---|---|---|---|
| nodeapp | 0/1 | CreateContainerError | 0 | 18s | 172.17.0.6 | minikube | <none> | <none> |
Let’s further identify the main cause of this error by running the following command:
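For example:

```
kubectl describe pod nodeapp
```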
Scroll to the end of the response, where you’ll see something similar to the response below.
...
Events:
| TYPE | REASON | AGE | FROM | MESSAGE |
|---|---|---|---|---|
| ---- | ------ | --- | ---- | ------- |
| Normal | Scheduled | 55s | default-scheduler | Successfully assigned default/nodeapp to minikube |
| Normal | Pulled | 32s | kubelet | Successfully pulled image "tolustar/bad-node-app" in 2.925695647s |
| Warning | Failed | 17s (x3 over 51s) | kubelet | Error: Error response from daemon: No command specified |
...
In the Events section, you will see an event with the type `Warning` and the reason `Failed`. The message states the cause of the error: “Error response from daemon: No command specified”.
Causes of CreateContainerError and solutions
In this section, you’ll learn the cause of this error, as well as other factors that can cause the `CreateContainerError`.
Command not available
In the `nodeapp` pod that you created, the image `tolustar/bad-node-app` was specified. The image is a Node.js script that was built into an image using the following Dockerfile configuration.
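The original Dockerfile isn’t reproduced here; a hypothetical sketch of a Node.js image with an empty `ENTRYPOINT` (which is what produces the “No command specified” error) might look like this:

```dockerfile
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# An empty ENTRYPOINT also resets any CMD inherited from the base image,
# leaving the container with no command to run
ENTRYPOINT []
```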
The above Dockerfile has an empty `ENTRYPOINT`. In Kubernetes, the `command` attribute maps to the Docker `ENTRYPOINT`, which means you can override the container’s `ENTRYPOINT` by configuring the `command` attribute in your pod object. In the previous example, the `command` attribute wasn’t configured, and the Dockerfile used to build `bad-node-app` has an empty `ENTRYPOINT`. This is what caused the `CreateContainerError` when you created the pod.
The first thing to check when you encounter a `CreateContainerError` is whether the Dockerfile used to build your container image has a valid `ENTRYPOINT`. If you don’t have access to the Dockerfile, you can instead configure a valid command in the `command` attribute of your pod object.
Delete the pod you created using `kubectl delete pod nodeapp`, and update your pod manifest file, `nodeapp.yaml`, to include a valid `command` attribute, as seen below.
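A sketch of the updated manifest (the script name `index.js` is an assumption about the image’s contents):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeapp
spec:
  containers:
    - name: nodeapp
      image: tolustar/bad-node-app
      # Overrides the image's empty ENTRYPOINT
      command: ["node", "index.js"]
```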
Create the new nodeapp by running `kubectl create -f nodeapp.yaml`. Then run `kubectl get pod nodeapp -o wide`, and you will get a response that indicates your pod is running, as seen below.
| NAME | READY | STATUS | RESTARTS | AGE | IP | NODE | NOMINATED NODE | READINESS GATES |
|---|---|---|---|---|---|---|---|---|
| nodeapp | 1/1 | Running | 0 | 13s | 172.17.0.6 | minikube | <none> | <none> |
Issues mounting a volume
If your volume object is not well configured, it can cause a `CreateContainerError` when it’s mounted on a pod. To illustrate this, you’ll create a simple NGINX app with a PersistentVolume (PV) and a PersistentVolumeClaim (PVC).
In your working directory, create a file called `nginx-storage.yaml` and paste in the following content:
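The original file isn’t reproduced here; a sketch consistent with the error shown later in this section would be the following. Note the invalid Windows-style `path` on line 13; the object names `nginx-pv` and `nginx-pvc` are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "\\mnt\\data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```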
Then create the PV and PVC from the above manifest file.
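Assuming the filename above:

```
kubectl create -f nginx-storage.yaml
```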
Create another file named `nginx.yaml`, and paste in the following content:
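A sketch of a pod that mounts the claim (the claim name `nginx-pvc` and the mount path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-storage
  volumes:
    - name: nginx-storage
      persistentVolumeClaim:
        claimName: nginx-pvc
```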
Create the nginx pod using the above manifest file.
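For example:

```
kubectl create -f nginx.yaml
```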
View the nginx pod.
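For example:

```
kubectl get pod nginx
```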
You will see the following response.
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| nginx | 0/1 | CreateContainerError | 0 | 8s |
When you use the `kubectl describe` command, it will give more details regarding the `CreateContainerError` status.
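For example:

```
kubectl describe pod nginx
```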
Towards the end of the response, you’ll see the following:
...
...
Events:
| TYPE | REASON | AGE | FROM | MESSAGE |
|---|---|---|---|---|
| ---- | ------ | --- | ---- | ------- |
| Normal | Scheduled | 73s | default-scheduler | Successfully assigned default/nginx to minikube |
| Normal | Pulled | 70s | kubelet | Successfully pulled image "nginx" in 3.2106753s |
| Normal | Pulled | 66s | kubelet | Successfully pulled image "nginx" in 3.3019907s |
| Normal | Pulled | 51s | kubelet | Successfully pulled image "nginx" in 2.8763731s |
| Normal | Pulled | 37s | kubelet | Successfully pulled image "nginx" in 2.884672s |
| Normal | Pulled | 22s | kubelet | Successfully pulled image "nginx" in 2.9257591s |
| Normal | Pulling | 9s (x6 over 73s) | kubelet | Pulling image "nginx" |
| Warning | Failed | 5s (x6 over 70s) | kubelet | Error: Error response from daemon: create \mnt\data: "\\mnt\\data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path |
As you can see, there’s an issue with the configured path in the PersistentVolume that was created. Now, you’ll modify it and remove the invalid characters.
Delete the nginx pod, PV, and PVC.
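For example (substitute your own PV and PVC names):

```
kubectl delete pod nginx
kubectl delete pvc nginx-pvc
kubectl delete pv nginx-pv
```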
Edit the `nginx-storage.yaml` file, changing line 13 from `path: "\\mnt\\data"` to `path: "/mnt/data"`. Save the file.
Recreate the PV, PVC, and pod from the `nginx-storage` and `nginx` manifest files.
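Assuming the same filenames:

```
kubectl create -f nginx-storage.yaml
kubectl create -f nginx.yaml
```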
View the status of the pod again.
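For example:

```
kubectl get pod nginx
```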
You now have a running pod.
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| nginx | 1/1 | Running | 0 | 47s |
Container runtime not cleaning up old containers
Another situation in which you might encounter `CreateContainerError` is when a node’s container runtime doesn’t clean up old containers. If this has happened, and you try to create a new pod object using the same container name as one of the old containers, Kubernetes won’t be able to create the pod, and you’ll get the `CreateContainerError`. If you check the pod description using `kubectl describe pod [POD_NAME]`, the `Events` section will have a message similar to this:
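A representative version of Docker’s name-conflict error (the container name and ID here are placeholders):

```
Error: Error response from daemon: Conflict. The container name "/k8s_nodeapp_nodeapp_default" is already in use by container "<container-id>". You have to remove (or rename) that container to be able to reuse that name.
```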
To resolve this error, you have to reinstall the container runtime on the affected node, where Kubernetes is trying to schedule the pod, and then re-register the node with the cluster.
Final thoughts
Troubleshooting `CreateContainerError` and `CreateContainerConfigError` can be frustrating if you don’t understand their underlying causes. Now that you have a solid understanding of what causes these errors, you can confidently resolve them when they appear, or, better yet, avoid them altogether.
Monitoring events and logs generated in your cluster is very important for the health and performance of your cluster. A good monitoring tool will offer you insights that will help you improve the performance of your cluster, and will help you identify, debug, and resolve issues like the ones covered in this tutorial.