Jenkins + k8s: Building Docker Image without Docker
As announced in 2020, Kubernetes deprecated Docker as a container runtime in v1.20, and it will be removed starting from v1.24, which is about to be released. If your Jenkins pipelines use the Kubernetes plugin, it is highly likely that they rely on the underlying Docker daemon to build images, so you might have a problem.

What are the options after the 1.24 release if you still want to use the Kubernetes plugin to build Docker images?
Use Docker
The most obvious option is to continue using Docker to build the images.
Yes, you can still have Docker installed on the Kubernetes Worker Node even if it is not used by kubelet as a container runtime. In this case, no changes to the pipelines are required.
However, bear in mind that this is only applicable in the self-managed clusters where you have full control over Worker Nodes.
What if you do not want to use Docker, or you are using managed Kubernetes and the choice of the container runtime is outside of your control?
Use BuildKit
Using BuildKit will require some minor changes to the pipeline Jenkinsfile.
Let’s go through the required changes.
Consider we have the following pipeline:
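A minimal sketch of such a pipeline is shown below; the image name user/myapp, the manifest k8s/deployment.yaml and the bitnami/kubectl helper container are placeholders for illustration, and the registry login is omitted for brevity:

```groovy
podTemplate(
    containers: [
        // Docker CLI container; builds go through the worker node's Docker daemon
        containerTemplate(name: 'docker', image: 'docker:dind', command: 'cat', ttyEnabled: true),
        containerTemplate(name: 'kubectl', image: 'bitnami/kubectl:latest', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
        // the node's Docker socket is mounted into the pod
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]
) {
    node(POD_LABEL) {
        checkout scm
        stage('Build Docker Image') {
            container('docker') {
                sh 'docker build -t user/myapp:latest .'
                sh 'docker push user/myapp:latest'
            }
        }
        stage('Deploy') {
            container('kubectl') {
                sh 'kubectl apply -f k8s/deployment.yaml'
            }
        }
    }
}
```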
It is a simplified version of the real pipeline where all the test, static code analysis and tagging steps are removed, so just two stages are left: build the Docker image and deploy it to the Kubernetes cluster.
In order to use BuildKit you have to replace the docker:dind container with moby/buildkit:master. In this case, mounting /var/run/docker.sock is not required anymore.
However, in order to let BuildKit push your image to the registry, you first need to create a new secret for the Docker configuration file (if your registry requires authentication). In the following example, we assume that we are pushing the image to Docker Hub and our credentials are as follows: username = user and password = password.
First, encode the username/password pair into a base64 string. To do so, run echo -n user:password | base64 -w 0 on Linux, or echo -n user:password | base64 on Mac. Copy the resulting string (dXNlcjpwYXNzd29yZA==) and add it to the following json file (config.json) as the "auth" property:
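Assuming the standard Docker Hub registry endpoint, config.json looks like this:

```json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "dXNlcjpwYXNzd29yZA=="
        }
    }
}
```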
Convert the content of the config.json file into a base64 string by running cat config.json | base64 -w 0 and add the resulting string to the Kubernetes secret manifest secret.yaml. We assume that your Jenkins Kubernetes plugin uses the jenkins namespace for pipeline pods.
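A sketch of such a manifest, assuming the secret is named docker-config (the name referenced by the podTemplate below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-config
  namespace: jenkins
data:
  # paste the output of: cat config.json | base64 -w 0
  config.json: <base64-encoded config.json>
```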
Then apply it to the Kubernetes cluster: kubectl apply -f secret.yaml.
Now we are ready to modify the podTemplate. It should look like the following:
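A sketch of the modified podTemplate, keeping the placeholder kubectl container from the earlier example; note that buildkitd typically has to run as a privileged container unless you use its rootless mode:

```groovy
podTemplate(
    containers: [
        // no command: the image's default entrypoint starts buildkitd
        containerTemplate(name: 'buildkit', image: 'moby/buildkit:master', ttyEnabled: true, privileged: true),
        containerTemplate(name: 'kubectl', image: 'bitnami/kubectl:latest', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
        // registry credentials become available as /root/.docker/config.json
        secretVolume(secretName: 'docker-config', mountPath: '/root/.docker')
    ]
) {
    node(POD_LABEL) {
        // stages as before; the Build Docker Image stage is updated below
    }
}
```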
Note that we renamed the container from docker to buildkit and removed the command definition. We also mounted the secret with Docker's config.json into the /root/.docker directory, so it is available in the buildkit container as /root/.docker/config.json.
And next, we replace the Build Docker Image stage with the following:
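A sketch of the stage using buildctl against the local buildkitd; the image name docker.io/user/myapp:latest is a placeholder:

```groovy
stage('Build Docker Image') {
    container('buildkit') {
        // buildctl reads registry credentials from /root/.docker/config.json
        sh '''
            buildctl build \
                --frontend dockerfile.v0 \
                --local context=. \
                --local dockerfile=. \
                --output type=image,name=docker.io/user/myapp:latest,push=true
        '''
    }
}
```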
That is pretty much it.
Using AWS ECR
One last point, in case you use AWS ECR: your Docker config.json file will be different:
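With the Amazon ECR credential helper, config.json only needs to point the registry at the helper; the registry host below is a placeholder:

```json
{
    "credHelpers": {
        "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
    }
}
```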
This config file does not contain any secrets, hence it can be stored as a ConfigMap.
If you use a ConfigMap, the volume mount in the podTemplate should change from secretVolume(secretName: 'docker-config', mountPath: '/root/.docker') to configMapVolume(configMapName: 'docker-config', mountPath: '/root/.docker').
However, in this case, you need to have the ECR credential helper (docker-credential-ecr-login) available in the PATH, so the Build Docker Image stage will look like the following:
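A sketch of the stage for ECR, assuming docker-credential-ecr-login is already present in the buildkit container (for example, baked into a custom image or installed in a preceding step) and using a placeholder repository name:

```groovy
stage('Build Docker Image') {
    container('buildkit') {
        // the ecr-login helper obtains a registry token using the pod's AWS credentials (IAM role or env vars)
        sh '''
            buildctl build \
                --frontend dockerfile.v0 \
                --local context=. \
                --local dockerfile=. \
                --output type=image,name=<aws_account_id>.dkr.ecr.<region>.amazonaws.com/myapp:latest,push=true
        '''
    }
}
```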
There are a number of edge cases, such as a K8S cluster behind a proxy or a container registry using self-signed certificates, that require a slightly more complex configuration; we will discuss them later.