Containers Are Not VMs, and Other Misconceptions
Container adoption has been growing steadily as organizations see the benefits the technology provides. For many of the engineers responsible for running those organizations' IT infrastructure, though, containers represent a new computing paradigm, and new concepts often come with misconceptions.
In the case of containers, some of the common misunderstandings out there include:
- “Containers are just light virtual machines.”
- “Containers are not for legacy applications.”
- And one of my favorites, “Containers are not secure.”
Furthermore, there seems to be this idea that vocabulary in this area is unimportant. More on that later. But let’s dig into each of these misconceptions to better understand the reality.
Containers Are Just Light VMs
So, one of the most common misunderstandings is that containers are light virtual machines. It’s easy to see why this mistake occurs: for at least the last decade, virtual machines have been the primary IT resource for many organizations, and for engineers working in that realm, it’s an easy jump to compare VMs to containers.
VMs run on a hypervisor, and containers have a special runtime that allows them to consume host resources. That makes them the same, right? No. First of all, a virtual machine carries a full operating system such as Windows or a Linux distro. Containers, by contrast, are more akin to processes than to full machines: they share the host's kernel, and the image packages only the nuts and bolts absolutely necessary to make a piece of software run. That includes any runtimes, libraries and binaries, but not a full operating system.
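One quick way to see this for yourself: start a container, then look for it in the host's process list. A minimal sketch, assuming a Linux host with Docker installed and using the public nginx image as an example:

```
# Start a container in the background
docker run -d --name demo nginx

# Its main process shows up as an ordinary process on the host,
# sharing the host kernel rather than booting its own OS
ps -ef | grep [n]ginx
```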
Another differentiator is that containers handle host resources much more efficiently than virtual machines. A VM reserves a slice of the host's CPU and memory for the entire time it's running, whereas a container only consumes what it needs when it needs it, and the host doesn't have to supply resources for a guest operating system. That lightweight nature also makes containers far quicker and easier to scale.
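This, too, is observable in practice. A sketch, again assuming Docker on a Linux host: usage is on demand, and caps are opt-in flags rather than the fixed up-front allocations that a VM's vCPUs and vRAM represent:

```
# Show live CPU and memory usage per container; idle containers sit near zero
docker stats --no-stream

# Limits are optional and per container; nothing is reserved unless requested
docker run -d --memory 256m --cpus 0.5 --name capped nginx
```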
Containers Are Not for Legacy Applications
Another point of confusion that comes up in conversation is that containers are not for legacy applications. In reality, containers are a great fit for legacy applications and allow organizations to get rid of legacy hardware and any support costs that come with it (if support is even still available). Simply wrapping a legacy app in a single container, without refactoring it into a microservice architecture, allows that app to run in any environment that can run containers, with no legacy hardware required. This makes life much easier for IT personnel: instead of accounting for the nuances of every application in the environment, a team only needs to provide an infrastructure capable of running containers.
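To make that concrete, here's a minimal sketch of wrapping a legacy Linux app; the base image, paths and start script below are hypothetical placeholders, not a prescription for any particular app:

```
# Hypothetical Dockerfile for a legacy Linux app, written via a heredoc
cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
COPY ./legacy-app/ /opt/legacy-app/
CMD ["/opt/legacy-app/start.sh"]
EOF

# Build once, then run on any host with a container runtime
docker build -t legacy-app:1.0 .
docker run -d --name legacy legacy-app:1.0
```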
Containers Are Not Secure
There’s this idea floating around out there that containers are not secure. Well, if you leave your car unlocked in a public parking lot it’s not secure, either.
The idea that containers are insecure comes from the fact that containers run on a shared host operating system, which could make it possible to escalate privileges inside a container and then gain access to the host server. This specific attack was demonstrated by CVE-2019-5736, better known as the runc vulnerability. In this scenario, a malicious actor could exploit a bug in runc, the low-level container runtime used by Docker and other container engines, to overwrite the host's runc binary and gain root privileges on the host machine.
As with anything else, certain steps need to be taken to ensure your container infrastructure is secure. First is keeping software up to date, and that includes container runtimes, be it Docker, containerd or something else. Enabling host protections such as SELinux can also be effective at securing container infrastructure; in fact, SELinux running in enforcing mode blocks the CVE-2019-5736 exploit.
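As a hedged sketch of those two steps on an SELinux-capable distribution (RHEL, Fedora and the like), assuming Docker as the engine:

```
# Confirm SELinux is actually enforcing, not just installed
getenforce            # should print "Enforcing"
sudo setenforce 1     # enable enforcing mode for the current boot if it isn't

# Check versions so known runtime CVEs (such as CVE-2019-5736) can be patched
docker version
runc --version
```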
Furthermore, there are third-party security tools that make it possible to protect—and get further insight into—what’s running within your container infrastructure. One of these tools is a container image scanner. A container image scanner will look for malware and vulnerabilities within a container image, prior to that container actually running, to let admins know if there’s an issue. There are also tools to protect running containers from outside threats or even lateral movement from malicious containers running in the environment.
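Trivy is one example of an open source image scanner; plenty of commercial alternatives exist. A minimal sketch, with a placeholder image name:

```
# Scan an image for known vulnerabilities before it is ever deployed
trivy image registry.example.com/legacy-app:1.0
```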
So, containers are not insecure, but just like anything else, you have to keep the doors locked and the windows rolled up to keep them from being abused.
Container Vocabulary
As mentioned above, another common misunderstanding has to do with vocabulary. There are a lot of terms to keep straight when it comes to computers, and container technology just adds to the pile: Kubernetes, Swarm, containers, Docker, LXC, host, node, worker, sidecar, etc., etc., etc. More often than not, when I'm talking to customers, I hear these terms used interchangeably; "Docker" or "Kubernetes" are often used to refer to containers or the container concept as a whole. This gets very confusing when discussing projects with third-party companies or even other teams within your organization, because these terms represent different parts of the container infrastructure.

Kubernetes and Swarm are orchestrators: they manage environments where applications are broken into microservices and run across many containers. Docker is a container engine used to build, launch and run individual containers, and it's definitely not the only game in town. It's important to get these terms straight to communicate effectively with the teams that will need to take advantage of the infrastructure.
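The distinction is easy to see in the tooling itself. A sketch contrasting the two layers, assuming Docker is installed and kubectl is pointed at a working cluster; the names are placeholders:

```
# Engine layer: Docker launches a single container on this one host
docker run -d --name web nginx

# Orchestration layer: Kubernetes schedules containers across a whole cluster
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
```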
Misunderstanding a relatively new concept is natural; it becomes a problem when those misunderstandings prevent IT departments from delivering quality infrastructure to their organizations. Clear the misconceptions away, and the benefits containers bring can add real value to an IT infrastructure.