When Good Images Go Bad: Ensuring Container Security

As container deployments become increasingly strategic to enterprise success, existing software development and security methodologies must evolve to support not only building containerized applications but also running and managing them.

When developers create a container image for their application, they start by choosing what’s known as a “base image” from a library of potential images, which may be public or internal to an organization. This base image will contain services and libraries providing core functionality for the running container application. The developer then layers in the application with its unique requirements. The resulting image is then tested and, assuming all the tests pass, pushed to a registry from which it can be deployed. For the purposes of this article, we’ll assume that any image pushed to a registry was a “good” image at the time it was pushed.
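The layering described above can be sketched in a few lines of Python. This is a simplified model, not a real registry API: layer contents and names here are invented for illustration, but the core idea is accurate — layers are content-addressed, so changing any base layer yields a different image, which is why patching a base image means rebuilding everything on top of it.

```python
import hashlib

def layer_digest(content: bytes) -> str:
    """Content-address a layer, as registries do with SHA-256 digests."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

# Base image: OS userland plus shared libraries (illustrative contents).
base_image = [
    layer_digest(b"debian-minimal-rootfs"),
    layer_digest(b"openssl-1.1.1"),
]

# The developer layers the application on top of the base image.
app_layers = [layer_digest(b"myapp-binary-and-config")]
image = base_image + app_layers

# Patching a base component changes its layer digest, so the rebuilt image
# is a distinct artifact from the original.
patched_base = [
    layer_digest(b"debian-minimal-rootfs"),
    layer_digest(b"openssl-1.1.1-patched"),
]
patched_image = patched_base + app_layers

print(image != patched_image)  # True: a patched base produces a new image
```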

Why Good Container Images Go Bad

The reality is that these “good” container images go “bad” every day, and often without warning. Once a new CVE is disclosed against any component in a container image, all containers deployed from that image are at increased risk of compromise.
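Determining which deployed containers a new disclosure affects is a simple lookup — provided you have an inventory mapping containers to the components inside their images. A minimal sketch, assuming such an inventory exists; the container names, components, and versions below are hypothetical examples, not real advisories:

```python
# Hypothetical inventory: deployed container -> components in its image.
deployed_containers = {
    "web-frontend-7d4f": {"nginx": "1.14.0", "zlib": "1.2.11"},
    "api-backend-9c2a": {"openssl": "1.1.0h", "zlib": "1.2.11"},
    "batch-worker-3b8e": {"openssl": "1.1.0h"},
}

def containers_at_risk(inventory, component, vulnerable_versions):
    """Return containers whose image contains a vulnerable component version."""
    return sorted(
        name for name, components in inventory.items()
        if components.get(component) in vulnerable_versions
    )

# A new disclosure lands against openssl 1.1.0h: every container built from
# an image containing that version is now at increased risk.
affected = containers_at_risk(deployed_containers, "openssl", {"1.1.0h"})
print(affected)  # ['api-backend-9c2a', 'batch-worker-3b8e']
```

Without the inventory, the same question requires pulling apart every deployed image after the disclosure — which is exactly the delay attackers count on.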

With Linux and Docker as the backbone of most container deployments, open source components are virtually guaranteed to be powering a containerized application. Development teams can consume any given open source component either directly from a public repository or from an internally managed repository that caches a selected version. The decisions about which repository to source a component from, and whether to cache that version, create a trust model that directly impacts the security of the applications.

Securing open source software requires different processes than proprietary or commercial software. With commercial software, the vendor provides guidance on deployment, notifies customers about security vulnerabilities and is the source for both security disclosures and any associated patches. With open source components, changes in the application life cycle probably aren’t being communicated to those using the component unless they have proactively engaged with the community supporting that component.

Not all components can be trusted consistently. Every file or image that comes from a repository could have a vulnerability at the point of download, even if it was free from known vulnerabilities at the time it was published. In essence, each component needs to be treated with the same level of caution organizations assign to any random download from the internet. Unlike proprietary code, which comes from a defined vendor, open source components can come from a variety of independent sources, or forks. Because each fork represents an independent distribution channel for the component, patches obtained from any fork other than the one you’re explicitly using may result in unexpected changes in application behavior.

Toward Better Container Security

Containerized applications shouldn’t be patched using traditional patch management processes. Rather, patches to container images should be made by rebuilding the image and then systematically replacing any running containers with the updated image. This paradigm shift often requires enterprises to reassess their patching processes and continuous monitoring requirements.
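The rebuild-and-replace model can be illustrated with a short sketch. Everything here is simplified and hypothetical — a real cluster would delegate this to an orchestrator such as Kubernetes via a rolling update, not application code — but it captures the shape of the process: no container is patched in place; each one running the old image is retired and replaced by one started from the rebuilt image.

```python
def rolling_replace(running, old_image, new_image):
    """Replace containers running old_image with fresh ones started from
    new_image, one at a time, so capacity is maintained during the rollout."""
    updated = []
    for container in running:
        if container["image"] == old_image:
            # Start a replacement from the rebuilt image; retire the old one.
            updated.append({"name": container["name"], "image": new_image})
        else:
            # Containers on unrelated images are left untouched.
            updated.append(container)
    return updated

# Hypothetical fleet and image references.
fleet = [
    {"name": "web-1", "image": "registry.local/web:1.0"},
    {"name": "web-2", "image": "registry.local/web:1.0"},
    {"name": "cache-1", "image": "registry.local/cache:2.3"},
]

patched = rolling_replace(fleet, "registry.local/web:1.0", "registry.local/web:1.1")
print(sum(c["image"] == "registry.local/web:1.1" for c in patched))  # 2
```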

Managing container infrastructure security in a production environment becomes even more challenging due to the scale of deployments. Can you trust that all containers in your Kubernetes or OpenShift cluster are performing the tasks you expect, and that none have been compromised?

Operations teams can better prepare themselves to minimize containerized application risk by answering the following questions:

  • Where does the base image used in the container come from?
  • What is the health of that base image, and when was it last assessed?
  • When the image was built, did the build use any cached components?
  • If the container was created internally, what is the trust model for the build environment?
  • Is there any way a foreign, or unapproved, container can start in your environment?
  • Is there any way someone can modify the contents of a running container?
  • Who has the rights to modify container images?
  • What happens if the base image registry or image tag goes away and I need to rebuild a container image in order to patch it?
  • When a security disclosure happens, what’s the process to determine impact?
  • How are images being updated and deployed in the face of new security disclosures?
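One of the questions above — whether a foreign, or unapproved, container can start in your environment — is typically answered with admission control. The following is a registry-agnostic sketch of an allowlist check, under the assumption that only images from an approved internal registry should run; in practice this would be enforced by a cluster admission controller, and the registry names are invented:

```python
# Hypothetical allowlist of registries trusted to serve production images.
APPROVED_REGISTRIES = {"registry.internal.example.com"}

def admit(image_ref: str) -> bool:
    """Admit only images pulled from an approved internal registry.

    The registry is the portion of the image reference before the first '/'.
    """
    registry = image_ref.split("/", 1)[0]
    return registry in APPROVED_REGISTRIES

print(admit("registry.internal.example.com/web:1.1"))  # True
print(admit("docker.io/library/nginx:latest"))         # False
```

A real policy would go further — pinning digests rather than tags and verifying image signatures — but a registry allowlist is the first gate against unapproved containers.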

Every organization must ensure their approach to container security scales to the requirements of their cluster. It can take only one vulnerable container to facilitate a breach, which is why organizations need visibility into every deployed container image.

To put this in perspective, in the first half of 2018 we’ve averaged close to 50 new CVE disclosures every day. Traditional security models such as periodic scans simply weren’t designed to keep up with such a high volume of disclosures. Organizations need to adopt container-specific vulnerability management tools and processes to minimize the potential for compromise.

Since containerized applications are largely open source in nature, successful container security management solutions need to build upon proven open source management paradigms, including:

  • Creating an inventory of open source components, including their origin. Since you can’t patch what you don’t know is present, this is a critical component of success.
  • Mapping each component to known vulnerability disclosures and engaging with the components’ community to ensure that new disclosures are quickly identified.
  • Creating an open source governance policy that includes security awareness in addition to IP compliance.
  • Investing in tooling that alerts on open source governance issues as they arise. The chosen tooling will need to scale to both the number of applications and their frequency of change.
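The elements above — an inventory with component origins, a mapping to known disclosures, and policy-driven alerting — can be tied together in a minimal sketch. The component names, origins, and the CVE identifier below are all hypothetical placeholders:

```python
# Hypothetical inventory of open source components and where they came from.
inventory = [
    {"component": "zlib", "version": "1.2.8", "origin": "example.com/upstream/zlib"},
    {"component": "leftpad", "version": "0.9", "origin": "unknown-fork"},
]

# Mapping of (component, version) to known disclosures (placeholder CVE ID).
known_vulnerabilities = {("zlib", "1.2.8"): ["CVE-EXAMPLE-0001"]}

# Governance policy: only components from vetted origins are acceptable.
TRUSTED_ORIGINS = {"example.com/upstream/zlib"}

def governance_alerts(inventory, vulns, trusted_origins):
    """Flag components with known disclosures or untrusted origins."""
    alerts = []
    for item in inventory:
        key = (item["component"], item["version"])
        for cve in vulns.get(key, []):
            alerts.append(f"{item['component']} {item['version']}: {cve}")
        if item["origin"] not in trusted_origins:
            alerts.append(f"{item['component']}: untrusted origin {item['origin']}")
    return alerts

for alert in governance_alerts(inventory, known_vulnerabilities, TRUSTED_ORIGINS):
    print(alert)
```

The design point is that the inventory is built once, at image build time, and re-queried whenever the vulnerability mapping changes — rather than rescanning every image on each new disclosure.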

Containers have streamlined the software delivery process. But containers also present unique security challenges. A container image is fundamentally just a software package that needs to be secured. Containers, like other delivery processes for modern applications, need processes in place to ensure any vulnerabilities in open source components are identified, triaged and patched as soon as operationally possible.

Tim Mackey

Tim Mackey works within the Synopsys Software Integrity Group as a technology evangelist. He joined Synopsys as part of the Black Duck Software acquisition where he worked to bring integrated security scanning technology to Red Hat OpenShift and the Kubernetes container orchestration platforms. Prior to joining Black Duck, Tim worked at Citrix as the community manager for XenServer and was part of the Citrix Open Source Business Office. Being a technology evangelist allows Tim to apply his skills in distributed systems engineering, mission critical engineering, performance monitoring and large-scale data center operations to customer problems. He takes the lessons learned from those activities and delivers talks globally at well-known events such as RSA, OSCON, Open Source Summit, KubeCon, Interop, CA World, Container World, DevSecCon, DevOps Days and the IoT Summit. Tim is also an O’Reilly Media published author.
