Securing Kubernetes and the Container Landscape

Looking for the best way to secure your Kubernetes and container environments? End-to-end security is your best bet.

Historically, the leading way to isolate and organize applications and their dependencies has been to place each application in its own virtual machine (VM). VMs make it possible to run multiple applications on the same physical hardware while keeping conflicts among software components and competition for hardware resources to a minimum. But virtual machines are bulky, typically gigabytes in size, and they don't really solve the problems of portability, software updates or continuous integration and continuous delivery (CI/CD). To resolve these issues, organizations have adopted Docker containers, which isolate applications in small, lightweight execution environments that share the operating system kernel. Typically measured in megabytes, containers use far fewer resources than virtual machines and start up almost immediately.

In the past, applications were deployed by installing them on a host using the operating system's package manager. This had the disadvantage of entangling the applications' executables, configuration, libraries and life cycles with each other and with the host OS. It was possible to build immutable VM images to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable. These days, organizations deploy containers based on operating system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own file systems, they can't see each other's processes and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and the host filesystem, they are portable across clouds and OS distributions.
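A minimal sketch of that isolation, assuming a local Docker daemon, the Docker SDK for Python (the docker package) and the public alpine image are available; the memory limit shown is an illustrative value, not a recommendation:

```python
# Sketch: container isolation and resource bounds via the Docker SDK for Python.
import docker

client = docker.from_env()

# The container has its own filesystem: this reads the alpine image's
# /etc/os-release, not the host's.
output = client.containers.run("alpine", "cat /etc/os-release", remove=True)
print(output.decode())

# Computational resource usage can be bounded per container;
# here memory is capped at 64 MB (illustrative value).
client.containers.run("alpine", "echo constrained", mem_limit="64m", remove=True)
```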

Because containers are small and fast, one application can be packed into each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. In addition, immutable container images can be created at build/release time rather than at deployment time, since each application doesn't need to be composed with the rest of the application stack or tied to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Containers are also much more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers' process life cycles are managed by the infrastructure rather than hidden by a process supervisor inside the container.
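As a rough sketch of the build/release step, again using the Docker SDK for Python: the ./myapp path, registry address and release tag below are hypothetical placeholders, and pushing assumes registry credentials are already configured.

```python
# Sketch: produce an immutable, release-tagged image at build time so the
# same artifact moves unchanged from development into production.
import docker

client = docker.from_env()

# Build from a Dockerfile in ./myapp and pin the result to a release tag.
image, build_logs = client.images.build(
    path="./myapp",
    tag="registry.example.com/myapp:1.4.2",
)
for entry in build_logs:
    if "stream" in entry:
        print(entry["stream"], end="")

# Push the tagged image so every environment pulls the identical bits.
client.images.push("registry.example.com/myapp", tag="1.4.2")
```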

Enter Kubernetes. This open source container orchestration system for automating deployment, scaling and management of containerized applications was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. At its most basic level, Kubernetes is a system for running and coordinating containerized applications across a cluster of machines. It is designed to manage the complete life cycle of containerized applications and services using methods that provide predictability, scalability and high availability. The need to move to the cloud for scalability and availability has spurred the adoption of containerized development technologies, which in turn has driven the spectacular growth and adoption of Kubernetes as an enabling platform.
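To make this concrete, here is a minimal sketch of talking to a cluster with the official Kubernetes Python client, assuming a working kubeconfig (for example, the one kubectl already uses) is present on the machine running the script:

```python
# Sketch: ask the Kubernetes API server what is running across the cluster.
from kubernetes import client, config

config.load_kube_config()   # read credentials from ~/.kube/config
v1 = client.CoreV1Api()

# The API server exposes the state of the whole cluster; here we list every
# pod (group of containers) in every namespace and the node it runs on.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} on node {pod.spec.node_name}")
```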

The central concept in Kubernetes is the cluster. A cluster is made up of many virtual or physical machines, each serving a specialized role as either a master or a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic based on the new container placements. As a Kubernetes user, you define how your applications should run and how they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with a high degree of flexibility, power and reliability.
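A brief sketch of what scaling and a rolling update look like through the Kubernetes Python client; the Deployment name "web", the "default" namespace, the replica count and the image tag are illustrative assumptions, not values from any particular cluster:

```python
# Sketch: scale a Deployment and roll out a new image version.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale up by patching the desired replica count; Kubernetes then creates or
# destroys pods on the nodes until the actual state matches it.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# A rolling update works the same way: patch the pod template's image and the
# control plane gradually replaces old pods with new ones.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/myapp:1.4.3"}
    ]}}}},
)
```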

But such spectacular growth in innovation is outpacing current security measures and controls, rendering existing security solutions ineffective. Cloud-native apps require a new approach. Software developers often lack the security knowledge needed to secure all of these containers in the cloud. Vulnerabilities can be introduced at any point of the development life cycle, and unsecured or unreviewed code can easily be deployed into production, leaving applications and data at risk. Many of these containers are public-facing and enclose all kinds of sensitive data, so compliance with privacy and regulatory frameworks demands a portfolio of security tools that can manage compliance within DevOps workflows. This new paradigm is often described as DevSecOps, once again highlighting the need to converge security with every stage of the software development and release life cycle. The best way to achieve this is to deploy an end-to-end Kubernetes security platform that monitors clusters for anomalies while securing the developed applications against all sorts of known and unknown attacks.
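As a small illustration of the kind of automated check such a platform runs continuously (this is a sketch of one single check, not a substitute for an end-to-end product), the Kubernetes Python client can scan the cluster for pods that run privileged containers; it assumes the same kubeconfig setup as the earlier examples:

```python
# Sketch: flag pods that run privileged containers across the whole cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc is not None and sc.privileged:
            print(f"WARNING: privileged container {container.name} "
                  f"in {pod.metadata.namespace}/{pod.metadata.name}")
```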

The rapid pace of application deployment and the highly automated runtime environment enabled by tools such as Kubernetes make it critical to consider runtime Kubernetes security automation for all business-critical applications.

Eduardo Rocha

Eduardo Rocha is a pre-sales engineer at GlobalDots with strong experience in Internet traffic analysis and security. He completed a doctoral degree in Portugal, after which he moved to Germany, where he worked at prestigious companies in the field of network analysis and security.
