Docker: Not Faster than VMs, but More Efficient

Are containers faster than virtual machines? The answer may seem to be yes. But if you look closely, you realize that, although Docker does offer some important advantages in the realm of resource consumption, Dockerized apps do not necessarily have better performance.

The idea that Docker is faster than traditional virtualization is widespread, and claims to that effect are easy to find.

Such claims are not strictly false. In some instances, Dockerized apps may indeed run faster than apps running inside virtual machines.

But that is not necessarily the case. A 2014 study by IBM that compared Docker to KVM found:

Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker’s NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis.

Plus, when you consider that hypervisors such as KVM and Xen deliver performance only about 2 percent worse than bare metal, you realize that raw performance is not really an important consideration in the first place when deciding between containers and virtual machines. Virtual machines already run essentially as fast as bare-metal servers.

Docker’s Performance Advantage

It’s not really true, then, to say that Docker is faster than virtual machines. But what you can say about Dockerized apps is that they use resources from the host system in a more efficient manner.

With Docker, you don’t have to assign system memory or disk space to a container before you start it. You can set limits on how many resources a container can use if desired, but that does not mean that the maximum resources you allow to a container are tied up by that container whenever it is running. Rather, containers organically consume the resources they need, without requiring the host to dedicate more resources to a container than are actually necessary at any given time.
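As a sketch, this opt-in limiting is done at launch time with flags such as `--memory` and `--cpus` on `docker run` (the container image and values here are illustrative):

```shell
# Start a container with a 512 MB memory cap and at most 1.5 CPUs.
# These are ceilings, not reservations: nothing is set aside in
# advance, and any headroom the container isn't using stays
# available to the host and to other containers.
docker run -d --name web --memory=512m --cpus=1.5 nginx

# Inspect actual consumption at any moment; it is typically far
# below the configured caps.
docker stats --no-stream web
```

Without those flags, the container simply draws resources as its processes need them, which is the organic consumption described above.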

This means containers make more efficient use of system resources than virtual machines. The latter generally require memory and storage space to be assigned to them before they start. Even if the apps running inside a virtual machine are not actually using all of the resources assigned to it, the virtual machine still monopolizes those resources. That’s not efficient.

Containers also offer the advantage of not having to duplicate the processes already running on the host system. With a container, you can run only the processes you need for whichever application you want to host inside the container. In contrast, virtual machines have to run a complete guest operating system, including many of the same processes that are already running on the server host.
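You can see this difference directly by listing the processes inside a minimal container (a hypothetical quick check, assuming a host with Docker installed):

```shell
# A minimal container runs only its entrypoint as PID 1, plus
# whatever that process spawns. Here the entrypoint is `ps` itself,
# so the listing shows just one or two entries: no init system,
# no sshd, no cron.
docker run --rm alpine ps -ef

# Running the same `ps -ef` inside a full VM guest would list
# dozens of system daemons before the application even starts.
```

Every process a VM guest duplicates is memory and CPU the host could otherwise spend on the application itself.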

In these respects, containers allow for more efficient distribution of the limited resources available on a host server. In an indirect way, this can translate into better performance for containerized apps, especially as the load on the server increases and optimizing resource distribution becomes important. However, it does not mean that containerized apps will run any faster or slower than those hosted by virtual machines. As long as the application in question has access to the system resources it needs, performance will be about the same whether you are using a virtual machine, Docker or bare metal.


The moral is this: Instead of saying that Docker is faster, we should say that Docker is more efficient. While efficiency and speed often go hand in hand, the former does not necessarily imply the latter. If you’re deciding whether to migrate your workloads from virtual machines to Docker, this is a crucial distinction to understand.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security.


3 thoughts on “Docker: Not Faster than VMs, but More Efficient”

  • Christopher, you seem to miss that virtualization has features like thin provisioning and memory ballooning. You don’t need to pre-allocate all the disk space or memory in advance. Containers are basically isolated processes and nothing more.

    They are more efficient for some workloads, but with a lot of tradeoffs. If you need to run a lot of services and daemons, a VM is better than launching one container for each app or service. Not to mention the security, management and isolation benefits you get from VMs. I happen to agree that some things work surprisingly poorly in containers: network-related workloads, IPsec, or containers with heavy traffic seem to perform much worse than the same apps running on traditional bare metal or a VM.

    The only real benefit is the process sharing with the host that you mentioned, but that comes with a big security tradeoff if you give untrusted users access to containers and someone escapes the container. Containers are also complex to manage today, even for basic tasks. People have been saying for years that containers will kill virtualization, but virtualization, done properly, can achieve many of the same benefits and more. You could spin up tiny VMs with something like Alpine Linux instead of starting containers, and since most management can be automated, containers have little advantage over that approach. Sharing the kernel among running processes will always be a downside when it comes to security and stability.

    Therefore VMs are still the preferred option for production environments, and containers are mostly relegated to testing and dev.

    • This basically ignores modern orchestration platforms (Kubernetes) and modern container tooling (Aqua Security). Running a lot of services as containers on a modern orchestration platform purpose-built for these deployment scenarios is trivial. Host-level security is addressed by Red Hat’s Atomic Host and by host security controls like Aqua’s. Heavy traffic is not a problem unless you don’t understand the fundamentals of your hosting platform; Kubernetes works at planet scale with thousands of containers and no issues if you understand the mechanics of running at scale (the API getting battered and needing more masters, for example). Of course you can run a minimal viable OS in VMs, and that is good, but it’s still not as efficient as containers, and efficiency is the speed of iteration in enterprise software delivery, which requires promoting apps. For ease of moving from dev to prod, Docker is it. And Kubernetes offers service-level HA and self-healing that doesn’t exist as a built-in in the VM context. This allows for layered HA, from the node level down to the service level. VMs are not bad, but in any application context where iteration is the key to success (most large, innovative software contexts), a microservice approach with the proper platform and tooling (often Docker and Kubernetes) is going to be superior.

