Container Isolation Is Not Safety

Container technology has seen a sharp rise in adoption for many organizations’ IT workloads in recent years. Its unprecedented ability to support fast, resilient and scalable software is now well acknowledged. However, containers also present a new, standardized attack surface for malicious actors. Understanding the container security landscape is essential for safeguarding operations throughout the software development life cycle (SDLC).

What Problem Does a Container Solve?

Containers offer a standardized format in which to package an application and isolate its runtime from the rest of the host operating system. Essentially, they are a lighter-weight alternative to traditional virtual machines because they do not require embedding a complete OS to execute an application. While lighter, they also abstract the entire application as a uniform workload, implementing a new separation of concerns.

Containers are useful because these properties open up a whole new level of automation for operational teams. Indeed, when correctly set up, a container infrastructure leads not only to higher reliability and scalability but also to lower operational and licensing costs.

Most of the time, a container can be killed and redeployed at any time without severe consequences, thanks to the immutable nature of the deployed application. As an analogy, containers are often referred to as “cattle,” since they are deprived of any identity (“state”), making them straightforward to replace in case of failure and to scale up or down in response to external factors such as demand.

Of course, full statelessness is not mandatory, as some kind of state is always needed somewhere. Containers that encapsulate it, like a containerized database, for instance, require close attention and special management rules.

The layer of abstraction containers introduce is a huge advantage in terms of infrastructure: Operational teams have complete control over resource allocation and don’t need to take into account what kind of application is going to be run, what the stack looks like or what kind of architecture has been chosen on the software side. With virtualization, development and deployment processes were much more tightly coupled because the hardware itself was virtualized, meaning it had to be similar in both environments.

Adapt Often

Containers are very much tied to DevOps and Agile philosophies. Using them as ephemeral building blocks allows teams, for example, to build, test and package fresh source code in an ad hoc environment that is destroyed after each stage completes.

This is known as the continuous integration and continuous delivery (CI/CD) pipeline. It is a very versatile mechanism that can be thought of as the backbone of any modern agile software development organization.
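As an illustration of how these pipeline stages lean on containers, the build-test-package cycle often boils down to a handful of commands, each running in a throwaway environment. This is only a sketch: the image tag, registry hostname and test script below are hypothetical placeholders, and a real pipeline would be driven by a CI system rather than run by hand.

```shell
# Hypothetical CI stages using disposable containers.
# registry.example.com, myapp and run-tests.sh are placeholders.

# Build: turn the freshly checked-out source into an immutable image
docker build -t registry.example.com/myapp:"$CI_COMMIT_SHA" .

# Test: run the suite inside a fresh container, discarded on exit (--rm)
docker run --rm registry.example.com/myapp:"$CI_COMMIT_SHA" ./run-tests.sh

# Package/publish: push the validated image; every intermediate
# environment used along the way has already been destroyed
docker push registry.example.com/myapp:"$CI_COMMIT_SHA"
```

Because each stage starts from a clean container, two teams’ pipelines cannot contaminate each other, which is what makes hundreds of concurrent runs practical.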

Nowadays, it is not uncommon to see hundreds of pipelines running every day for a single company, where many small teams each work on one single component of the final product. In some cases, this can be as fine-grained as a single SaaS endpoint. Running this kind of company-level integration without teams fighting for resources in the next release (or, more pragmatically, fighting to ship their last-minute hotfix) would be unimaginable without the flexibility of containers.

Another example of how containers encourage small iterative improvements and innovation is blue-green deployment: once regression tests pass, the new version (“green”) can be automatically deployed alongside the current one (“blue”) and exercised against real production data to verify that performance is still good before traffic is switched over.

Developers advocated for frequent integration of source code long before DevOps (think extreme programming), but it is fair to say that DevOps has been a game-changer in accelerating time to market and making continuous integration a reality. Businesses sensed that embracing this framework would let them significantly increase the velocity at which their products could be updated and new features shipped, and eventually convert that speed into a competitive advantage.

Certainly, containers are the piece of technology bridging the gap between development and operations: Developers are accountable for creating programs that can run in them, while ops are at the controls for managing and monitoring the workloads. What’s new is that both roles are now often filled by the same person, and who could be in a better position to monitor an application than the person who coded it?

Containers Are Not Security Devices

Among container vendors, Docker is by far the most widely used, to the point that it has become synonymous with containers. Adopted by engineers around the world, it provides an easy-to-use interface that completely abstracts away the underlying Linux kernel mechanisms (namespaces and cgroups) supporting it.

One could argue that one negative side effect of its massive rate of adoption has been insufficient education about what Docker is—and is not—for the people who will have to use this tool professionally on a daily basis.

In fact, the word “container” is often misunderstood, as many developers tend to associate the concept of isolation with a false sense of security, believing this technology to be inherently safe. One key here is to understand that containers don’t have any security features by default. On the contrary, their security completely depends on the supporting infrastructure (OS and platform), their embedded software components and their runtime configuration.

The assumption that containers will add a layer of security to the applications running inside is a widespread misconception, especially among people who have little familiarity with operational security mitigation. It might even result in containers being exposed to the wild without proper configuration, allowing an attacker to gain access and escalate privileges inside the host.

A recent study by Aqua Security found that “50% of new misconfigured Docker instances are attacked by botnets within 56 minutes of being set up.” While “the majority of attacks were focused on cryptomining […], 40% of them also involved backdoors to gain access to the victim’s environment and networks. Backdoors were enabled by dropping dedicated malware or creating new users with root privileges and SSH keys for remote access.”

To make matters worse, GitGuardian recently conducted a study on secrets sprawl in Docker images: of 2,000 public images freshly pushed to Docker Hub, 7% contained at least one secret.

Hunting for Secrets in Containers

For these reasons, most security experts are advocating for better education on the security challenges posed by cloud-native technologies like containers as part of a DevSecOps migration. They are also pushing to accelerate threat mapping to gain a clearer vision of the security landscape surrounding containers.
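To make the problem concrete, the core idea of hunting for secrets in an image can be sketched in a few shell commands. Everything below is illustrative: the “extracted layer” is simulated with a local directory, the credential is a fake placeholder, and real scanners use far richer detection than this naive pattern match.

```shell
# Minimal sketch of secret hunting in an extracted image layer.
# The directory, file and credential below are all simulated.

# Simulate a layer that ships a leaked credential in an .env file
mkdir -p layer/app
printf 'AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG\n' > layer/app/.env

# Naive scan: list every file matching common secret patterns
grep -rEl 'AWS_SECRET_ACCESS_KEY|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|password[[:space:]]*=' layer/
# → layer/app/.env
```

In practice the layers would first be exported from the image (for example with `docker save`) and unpacked, and every layer must be scanned, since a secret deleted in a later layer still lives on in an earlier one.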

Advancing Toward a Common Threat Map

Driven by growing demand from the security community, MITRE released an ATT&CK matrix for containers in April 2021, covering both orchestration-level and container-level adversary behaviors. The rationale behind this addition to its well-known catalog was the need for a holistic view of how containers are being targeted in enterprise environments.

The matrix was built from open source intelligence gathered by the Center for Threat-Informed Defense (CITD) as well as from security professionals’ experiences on the frontline of container threat defense and vulnerability research. As described by its authors, it is the first step toward a collaborative map of the real-world attack surface exposed by containers in the broader context of cloud security.

The CITD’s data is concordant with Aqua’s statistics: “Most behaviors ultimately lead to cryptomining activities, even when they involve accessing secrets such as cloud credentials,” according to the report. Nonetheless, the CITD notes, “using containers for more ‘traditional’ purposes such as exfiltration or collection of sensitive data is publicly underreported.”

As the new building blocks of modern IT infrastructure, containers come with their own set of challenges. Orchestration and a high level of automation are new layers being actively targeted to gain initial access and hide malicious activity. A common language spoken by developers, ops and security teams is therefore becoming more and more a necessity.

Container Security 101

The following are some of the most common security issues found in containers:

  • Containers are built from images, whose provenance should be subject to strict policies and which should be automatically and regularly scanned.
  • A privileged container is essentially allowed to do anything the host can do. Implementing the principle of least privilege at every stage, including in CI/CD pipeline configuration, is essential as supply chain attacks become more and more common. Dropping broad privileges in favor of finer-grained Linux capabilities greatly reduces the attack surface.
  • Inter-container communication restrictions and host network isolation can be quite challenging to manage, but containers living on the same network (as in most default configurations) make lateral movement easy.
  • Resources like CPU, memory and the number of processes started or run in parallel in a container can also be limited. This helps prevent denial-of-service attacks such as fork bombs and contains the blast radius of a compromised workload.
  • Containers are easy to use as vectors to infiltrate the underlying host. Isolation of the host network should be extended as far as possible; isolated user and process namespaces and read-only filesystems should be used whenever possible.
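Several of these hardening measures map directly onto `docker run` flags. The invocation below is a sketch only: the image name, network name and limit values are placeholders to be adapted to the workload.

```shell
# Illustrative hardened container launch; myapp, my-isolated-net
# and the limit values are placeholders, not recommendations.
#
#   --cap-drop/--cap-add          least privilege: drop all capabilities,
#                                 re-add only what the app truly needs
#   --read-only --tmpfs /tmp      read-only root filesystem with a small
#                                 writable scratch mount
#   --memory/--cpus/--pids-limit  resource caps that blunt fork bombs
#                                 and resource exhaustion
#   --network                     dedicated network instead of the
#                                 shared default bridge
#   --security-opt                block privilege escalation via
#                                 setuid binaries
docker run --rm \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --read-only --tmpfs /tmp \
  --memory 256m --cpus 0.5 --pids-limit 100 \
  --network my-isolated-net \
  --security-opt no-new-privileges \
  myapp:latest
```

None of these flags is a silver bullet; they are defense-in-depth layers that limit what an attacker can do after compromising the application inside.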

As more and more IT teams are adopting DevOps, containers are becoming an essential part of their toolbox. But the rapid adoption rate is accompanied by a surge in security issues. Indeed, containers and related technologies like orchestration also represent a new vector for attackers, with some common misconfigurations easily exploitable. In response, the industry is moving forward to gather more intelligence on in-the-wild adversary behaviors. However, education about what containers are not—security devices—is also needed to bring down the myth that application isolation equals safety.

Thomas Segura

Thomas has worked as both an analyst and a software engineer consultant for various large French companies. His passion for tech and open source led him to join GitGuardian as a technical content writer. He now focuses on clarifying the transformative changes that cybersecurity and software are undergoing.
