Policy Engines: Ensuring Configuration Security in Kubernetes

Given their ease of use, flexibility and level of automation they deliver, policy engines have become a required component in Kubernetes clusters

Kubernetes’ adoption in the enterprise has clearly grown beyond the early adopter phase. Basic operations such as provisioning clusters and deploying applications have become easier with the advent of managed Kubernetes services such as EKS, GKE and AKS, and of application deployment tools such as Helm, Kustomize and other continuous delivery (CD) tools. Kubernetes has matured to the point where users no longer need to worry about how to spin clusters up; instead, they need to consider how to maintain and scale those clusters over the long term. In other words, users need to identify and address Day 2 operations challenges such as monitoring, logging, security, backup/disaster recovery and compliance.

As the number of clusters and the number of applications running on Kubernetes increase, the amount of configuration explodes. The power and flexibility of Kubernetes come from its declarative model, but that same model produces a proliferation of fine-grained configurations that must be managed. In Kubernetes, applications and all related configurations are defined using YAML. At scale, it becomes practically impossible to manually verify the validity and integrity of all that configuration and to eliminate misconfigurations. It is also no secret that several major outages and security breaches have been caused by misconfigurations. A key challenge, then, is ensuring configuration security without hurting developer agility and productivity.

Over the past couple of years, an elegant solution to the configuration security challenge has emerged: policy engines, which provide a scalable way to enforce configuration standards and best practices, ensuring consistent configurations across multiple clusters. Kubernetes is designed to be pluggable and extensible. One way to extend Kubernetes is by plugging in an admission controller, so that every API request made to the Kubernetes API server can be inspected, validated or mutated.

A policy engine typically registers as an admission controller so that it can inspect and block insecure or invalid configurations in real time, before they reach the cluster. Policy engines are also usually extensible, so custom rules can be defined to validate configuration and to flag or reject anything that violates corporate compliance standards. Common examples include rules that detect or block configurations that allow containers to run as root, or that require host network access.
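For illustration, here is a sketch of such a validation rule written for Kyverno, an open source policy engine. The policy name, message and `enforce` mode are illustrative choices; the `=(hostNetwork)` anchor means the field, if present, must be `false`:

```yaml
# Sketch of a Kyverno ClusterPolicy that blocks host network access.
# Names and messages are illustrative, not from the original article.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-network
spec:
  validationFailureAction: enforce   # reject violating requests instead of just auditing
  rules:
    - name: host-network
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Host network access is not allowed."
        pattern:
          spec:
            # if hostNetwork is set at all, it must be false
            =(hostNetwork): false
```

With this policy applied, any attempt to create a pod with `hostNetwork: true` is rejected by the admission webhook with the message above.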

More sophisticated policy engines can go beyond just validating configuration by allowing the configuration to be mutated or updated on the fly. For example, such a policy could be used to add labels or annotations to Kubernetes configurations based on predetermined criteria. Another example of using mutation policies is to ensure certain workloads get deployed on specific nodes by adding node selectors or node affinity rules.
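A mutation policy along those lines might look like the following Kyverno sketch, which adds a node selector to matching pods. The `workload-type: gpu` label and the `gpu: "true"` node label are hypothetical values chosen for the example:

```yaml
# Sketch of a Kyverno mutation rule that steers labeled workloads
# onto specific nodes. Label names here are hypothetical.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-gpu-node-selector
spec:
  rules:
    - name: set-node-selector
      match:
        resources:
          kinds:
            - Pod
          selector:
            matchLabels:
              workload-type: gpu   # only mutate pods carrying this label
      mutate:
        # strategic-merge patch applied to the incoming pod spec
        patchStrategicMerge:
          spec:
            nodeSelector:
              gpu: "true"
```

The developer submits an ordinary pod manifest; the policy engine transparently patches in the `nodeSelector` before the object is persisted.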

Besides mutating existing configurations, some policy engines can also generate new configurations, enabling the next level of automation. A common example is enabling the secure sharing of Kubernetes clusters between teams or applications. By default, Kubernetes places few limits on access to cluster resources: any user with access to the cluster can consume all of its resources or potentially even reach other applications running in it. While Kubernetes has built-in primitives such as network policies and resource quotas that enable secure sharing of a cluster, someone needs to automate the creation of these resources whenever a new namespace is created, and prevent users from updating or deleting them. A policy engine that is capable of generating Kubernetes resources can automate this task very easily: a policy can be written to create resource quotas, network policies and other necessary resources whenever a new namespace is created.
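As a sketch, a Kyverno generate rule for this scenario could look like the following. The quota values are arbitrary placeholders; `synchronize: true` asks the engine to re-create the resource if a user deletes or modifies it:

```yaml
# Sketch of a Kyverno generate rule: create a ResourceQuota in every
# new namespace. Quota limits are placeholder values.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-namespace-quota
spec:
  rules:
    - name: generate-resourcequota
      match:
        resources:
          kinds:
            - Namespace   # trigger whenever a namespace is created
      generate:
        kind: ResourceQuota
        name: default-quota
        # place the quota inside the namespace that was just created
        namespace: "{{request.object.metadata.name}}"
        synchronize: true   # restore the quota if it is changed or deleted
        data:
          spec:
            hard:
              requests.cpu: "4"
              requests.memory: 8Gi
```

A similar rule can generate a default-deny NetworkPolicy alongside the quota, so every new namespace starts out isolated.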

Policy engines can also be used for fine-grained access control. Kubernetes already offers sophisticated, granular role-based access control (RBAC), but it is not sufficient for certain scenarios. For example, it may be necessary to block the deletion of objects tagged with a particular label for all users other than cluster admins. This can be achieved easily with a policy.
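That scenario could be sketched in Kyverno as a deny rule keyed on the request operation. The `protected: "true"` label and the restriction to ConfigMaps are hypothetical choices for the example:

```yaml
# Sketch: deny DELETE on resources labeled protected=true for everyone
# except cluster admins. Label and resource kind are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-labeled-resources
spec:
  validationFailureAction: enforce
  background: false   # rule uses request-time data, so skip background scans
  rules:
    - name: block-delete
      match:
        resources:
          kinds:
            - ConfigMap
          selector:
            matchLabels:
              protected: "true"
      exclude:
        clusterRoles:
          - cluster-admin   # admins are exempt from this rule
      validate:
        message: "Only cluster admins may delete protected resources."
        deny:
          conditions:
            - key: "{{request.operation}}"
              operator: Equals
              value: DELETE
```

This complements RBAC: RBAC decides whether a user may delete ConfigMaps at all, while the policy adds a condition (the label) that RBAC cannot express on its own.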

As the use of Kubernetes grows, organizations are increasingly looking to address the Day 2 operations challenges, and ensuring compliance, governance and security controls are extremely high on their list. Policy engines have emerged as an extremely flexible and scalable solution to automate the tedious task of enforcing configuration security.

Open source policy engines such as Open Policy Agent (OPA) and Kyverno can easily be deployed to existing Kubernetes clusters, and policies can be configured to enforce best practices and detect violations. Given the ease of use, flexibility and level of automation it delivers, a policy engine has very quickly become a required component in any production-grade cluster.

This article is part of a series of articles from sponsors of KubeCon + CloudNativeCon 2020 North America

Ritesh Patel

Ritesh Patel is a co-founder and VP of Product at Nirmata, a cloud-native application management platform built on Kubernetes. Ritesh has 15 years of experience in enterprise software development and team leadership. Prior to Nirmata, Ritesh was responsible for private cloud strategy and business development at Brocade where he led various OpenStack-related initiatives and created a partner ecosystem. Ritesh has also held key technical positions at Trapeze Networks, Nortel, and Motorola. Ritesh holds an MBA from UC Berkeley and an MS from Michigan State University.
