Kubernetes is deprecating Docker as a container runtime after the v1.20 release. At a high level, Kubernetes will show a deprecation warning after the cluster is upgraded to v1.20, and Kubernetes plans to remove Docker container runtime support as early as the v1.23 release.
Technically, Kubernetes is not removing Docker as a container runtime. Rather, it is deprecating dockershim, a component in the kubelet that lets it communicate with Docker to create/delete containers. Once dockershim is removed, the kubelet has no way to communicate with Docker; hence the deprecation.
Before we understand the deprecation part, let us talk about the basics:
- What are Container Runtimes?
- Is Docker a high-level or low-level container runtime?
- Container Runtime Interface between Kubernetes and Docker
What are Container Runtimes?
A container runtime is the software that runs and manages containers. There are two types of container runtimes: high-level and low-level. Broadly, the high-level runtime manages the low-level runtime, while the low-level runtime focuses solely on running containers.
High-level container runtimes have elements like a daemon, API, and an interactive CLI. These elements help in managing, unpacking, and passing the container image to the low-level container runtime. In contrast, low-level container runtimes are responsible for the actual mechanics of running containers.
Some examples of high-level container runtimes are Docker, containerd, and CRI-O, while runC, railcar, and LXC are low-level container runtimes.
Is Docker a high-level or low-level container runtime?
Docker is a widely used, OCI-compliant, open-source container runtime. It manages the container lifecycle by building, packaging, sharing, and running containers, plus it has additional features like volumes and networking.
Docker is based on a client/server architecture and includes both high-level (the Docker daemon and containerd) and low-level (runC) container runtimes. It comprises three primary components: the Docker daemon, the Docker REST API, and the Docker CLI. The Docker CLI uses the Docker API to interact with the Docker daemon, which manages objects such as images, containers, networks, and volumes.
Now, let us see an example of how Docker creates a container. If a user executes a Docker command from the CLI,
docker container run -it --name <container-name> <image>:<image-tag>
the Docker client sends an API payload to the daemon's endpoint. Once the daemon accepts the payload, it forwards the request to containerd (the high-level runtime). Containerd then unpacks the image and hands runC (the low-level runtime) an OCI bundle to create the container. runC interacts with the host kernel and starts the container.
Note: runC has no notion of container images. It cannot execute a command like runc run nginx:latest to create a container from an image. Instead, runC expects an OCI bundle, i.e., a root filesystem plus a config.json file.
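To make the OCI bundle idea concrete, here is a minimal sketch of driving runC by hand, using Docker only to export an image's filesystem. It assumes docker and runc are installed and that you have root privileges; the bundle and container names are placeholders.

```shell
# Build an OCI bundle by hand and run it with runC directly.
mkdir -p mybundle/rootfs
cd mybundle

# Export an image's filesystem into the bundle's rootfs
# (runC never sees the image itself, only this directory tree).
docker export "$(docker create nginx:latest)" | tar -C rootfs -xf -

# Generate a default config.json for the bundle
runc spec

# rootfs/ + config.json = an OCI bundle; runC can now start it
sudo runc run mycontainer
```

This is exactly the hand-off containerd performs for you: unpack the image, lay out the bundle, invoke runC.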
Now that we know what Docker is and how it works, let us see how it is tied to Kubernetes.
Container Runtime Interface between Kubernetes and Docker
Initially, Kubernetes used Docker as the only container runtime to create/delete pods on the worker nodes. Given that the container runtime space has been rapidly evolving, Kubernetes has developed a plugin API called Container Runtime Interface (CRI) to connect the kubelet to various container runtimes.
As an example, cri-containerd (now integrated into containerd and maintained by the containerd project) is the CRI plugin between containerd and the kubelet.
For any container runtime to work with Kubernetes, it needs to be both CRI- and OCI-compliant. The containerd and CRI-O runtimes are both, whereas Docker is only OCI-compliant. Given Docker's broad user base, Kubernetes created a temporary solution as part of the kubelet code, called dockershim, to interact with the Docker engine.
Suppose the Docker engine is used as the container runtime on the Kubernetes worker nodes, as represented in the image above. In that case, the kubelet sends the container creation request to dockershim over gRPC (they run on the same host), dockershim forwards the request to the Docker daemon, the daemon reroutes it to containerd, and containerd invokes the OCI binary (runC) with an OCI bundle to create the container.
Note: dockershim is a shim developed by the Kubernetes project to act as a bridge between the kubelet and Docker's non-CRI interface.
Well, those are the basics about Docker and how it works with the Kubernetes CRI.
Let us get back to why Kubernetes has deprecated Docker as a container runtime:
- Docker seems fine, why the deprecation?
- If not Docker, what are the other options?
- What if I still need Docker as my Kubernetes runtime?
- Who/What is impacted by this change?
Docker seems fine, why the deprecation?
Although Docker provides a developer-friendly UX, storage, networking, and container management, Kubernetes only needs a container runtime that can pull and unpack an image into an OCI bundle and invoke an OCI binary (a low-level runtime) like runC to create and destroy containers.
Given what Kubernetes needs, these are the reasons why Kubernetes is deprecating Docker:
Reason 1: Docker is not a CRI-compliant runtime. Docker's CRI adapter (dockershim) is part of the kubelet code and runs inside the kubelet. It is tightly coupled with the kubelet's lifecycle and has had inconsistent integrations with the kubelet.
Reason 2: Because dockershim lives in the kubelet's code, removing it eliminates a maintenance burden on the Kubernetes community.
Reason 3: A kubelet request routed through dockershim has to hop through several components to create a container, while runtimes like containerd and CRI-O require fewer hops and have shown more reliable performance than Docker.
Reason 4: Docker's scope is too large for Kubernetes clusters, as Kubernetes does not use Docker-native features like storage and networking; removing it reduces the attack surface.
If not Docker, what are the other options?
As I said, Kubernetes can run any OCI-format image on any CRI- and OCI-compliant container runtime. Runtimes like containerd and CRI-O are well suited to Kubernetes' needs and are both CRI- and OCI-compliant.
Containerd started as part of Docker and was eventually spun out as an open-source container runtime. It abstracts away syscalls and kernel-level details, providing an interface that platforms such as Docker and Kubernetes use to drive low-level runtimes like runC. CRI-O is a lightweight container runtime developed specifically for Kubernetes' needs.
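As a hedged sketch of what switching looks like in practice, these are the kubelet flags that point it at containerd's CRI socket. The socket path and flag names can vary by Kubernetes version and distribution, so treat this as illustrative, not definitive:

```shell
# Point the kubelet at containerd instead of the built-in dockershim
# (typically set in the kubelet's systemd drop-in or config file).
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

CRI-O works the same way, just with its own socket (commonly unix:///var/run/crio/crio.sock).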
Fun fact: I was in the front row when Red Hat announced it would not support Docker and introduced CRI-O for OpenShift at Red Hat's 2019 Boston Summit.
What if I still need Docker as my Kubernetes runtime?
There is still hope for using Docker as a Kubernetes runtime: Mirantis and Docker have partnered to maintain dockershim by taking it over from the Kubernetes project. This means you can continue to use the Docker engine as a container runtime for Kubernetes by switching from the built-in dockershim to the external one.
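Mirantis publishes this externalized shim as cri-dockerd. A minimal sketch of pointing the kubelet at it (socket path taken from the cri-dockerd defaults and may differ in your setup):

```shell
# Use the external dockershim (cri-dockerd) instead of the built-in one;
# the kubelet now treats Docker like any other remote CRI runtime.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
```

From the kubelet's point of view, Docker then looks like just another CRI endpoint.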
Who/What is impacted by this change?
The deprecation may or may not impact your environments depending on your current Kubernetes configurations and use cases.
Are the businesses impacted?
If you are using a cloud provider's managed Kubernetes service like EKS, GKE, or AKS, most of these providers already support containerd as a runtime. Otherwise, work with your cloud provider to ensure that the worker nodes use a supported container runtime.
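A quick way to check where you stand today: kubectl reports each node's runtime, so you can see at a glance whether a node is still on Docker.

```shell
# The CONTAINER-RUNTIME column shows each node's runtime and version,
# e.g. docker://19.3.x or containerd://1.4.x
kubectl get nodes -o wide
```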
If your Kubernetes clusters are set up on-premises with Docker, switching the container runtime will cause downtime on each node, because the kubelet must be reconfigured for the new runtime. For zero-downtime upgrades, use a rolling update: cordon and drain the nodes one by one so application pods move from the cordoned node to an active node.
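The rolling update described above boils down to a per-node loop like this sketch (node name is a placeholder; you may need extra drain flags such as --delete-emptydir-data depending on your workloads):

```shell
# Switch the runtime one node at a time
kubectl cordon <node-name>                     # stop new pods landing here
kubectl drain <node-name> --ignore-daemonsets  # evict pods to other nodes

# ...on the node: remove Docker, install containerd/CRI-O,
#    reconfigure and restart the kubelet...

kubectl uncordon <node-name>                   # allow scheduling again
```

Repeat for each node, verifying pods are healthy before moving on.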
Do Developers / Admins need to care?
Developers, absolutely not! They can continue writing Dockerfiles and building application images with Docker, because the images Docker produces are OCI-compliant. This means a Docker-built image can run on any OCI-compliant runtime, including containerd and CRI-O.
For admins, if your Kubernetes clusters are set up on-premises with Docker, you need to replace it with a supported runtime and start learning the crictl CLI tool to inspect and debug containers through the CRI.
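The good news is that crictl mirrors the familiar docker commands. A few examples, run on a worker node (the runtime endpoint below assumes containerd; adjust it for your runtime):

```shell
# Inspect a node's containers through the CRI, not through Docker
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps

crictl pods                 # list pod sandboxes on this node
crictl images               # list images the runtime has pulled
crictl logs <container-id>  # fetch a container's logs
```

Muscle memory mostly transfers: `docker ps` becomes `crictl ps`, `docker logs` becomes `crictl logs`, and so on.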
What about the Kubernetes based CI/CD processes?
If your CI/CD pipelines run in Kubernetes clusters and use /var/run/docker.sock for Docker image builds, it's time to look for alternatives:
- Create a dedicated build machine with Docker installed, and point your pipeline at that machine instead of the node's /var/run/docker.sock.
- Use OCI-compatible container image builders such as Kaniko, img, or buildah as an alternative.
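As one hedged example of the second option, Kaniko builds images entirely in userspace inside a pod, with no Docker socket involved. The registry, repository URL, and tag below are placeholders, and a real pipeline would also mount registry credentials:

```shell
# One-off in-cluster build with Kaniko; no /var/run/docker.sock needed
kubectl run kaniko --restart=Never \
  --image=gcr.io/kaniko-project/executor:latest \
  -- --dockerfile=Dockerfile \
     --context=git://github.com/<org>/<repo>.git \
     --destination=<registry>/<image>:<tag>
```

Because nothing here talks to a Docker daemon, the same build works unchanged on containerd or CRI-O nodes.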
To summarize: Kubernetes is deprecating Docker, with removal expected as early as late 2021. It is not the end of the world; the change will make things more manageable, and you have plenty of time plus multiple options and techniques to work this out.
If you are using a cloud provider’s managed Kubernetes service like EKS, GKE, or AKS, please work with your cloud provider to ensure a stable upgrade process. As Kubernetes said,
This change is coming. It’s going to cause some issues, but it isn’t catastrophic, and generally, it’s a good thing.
Hit me up on Medium, LinkedIn, or Twitter if you want to talk about upgrading your Kubernetes clusters with zero business impact. If you are working on AKS and planning to take Azure DevOps certification, here are my thoughts on it.
I hope the content was educational, and thanks for reading!! Happy Upgrade 🚀🚀