Ensuring a Smooth Kubernetes Deployment

It’s not hyperbole to say that Kubernetes is a game-changer for developers and DevOps teams. Used to its full potential, it can deliver efficiency and stability gains that have a tangible impact on a company’s bottom line. However, as is often said, “Nothing good ever came easy.” Many of us know that Kubernetes is inherently complex, and unfortunately that complexity can lead to oversights that cause major security issues. Thankfully, with the right procedures in place and a little additional work, these hurdles can be overcome.

The Kubernetes documentation is vast and spans multiple roles within an organization, from operators and security engineers to developers, so knowing how much of it is relevant to your job can be difficult. Step one, which may sound very rudimentary, is to identify what you need from the documentation and then focus primarily on that area. Working from examples can also be a great way to learn; however, there are as many bad practices out there as good ones, so understanding the framework of Kubernetes is still key.

Next, it’s important to understand the configuration of Kubernetes and what is or isn’t covered as part of the orchestration setup. Because there are so many ways to build Kubernetes, from cloud Kubernetes-as-a-service offerings and off-the-shelf products through to do-it-yourself setups, the configuration of Kubernetes and the underlying infrastructure can differ drastically. A few main principles apply: Make sure that your Kubernetes API endpoint isn’t public; that basic authentication and client certificate authentication are not enabled; that role-based access controls are in place and constrained appropriately; that network policies are implemented; and that pod security policies are enabled with a set of sensible defaults that restrict how containers can run, in line with security best practices. There’s not enough space here to review each setting; luckily, there is a wealth of resources online that you can consult to ascertain exactly which settings will be most appropriate for your business.
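
To make one of these principles concrete, here is a rough sketch of what “constrained appropriately” can look like for role-based access control, using the official Kubernetes Python client to create a read-only, namespaced role. The namespace and role names are placeholders; your own rules will depend on what each team actually needs.

    # A minimal sketch: a least-privilege, namespaced Role created with the
    # official Kubernetes Python client. The namespace and role name are
    # placeholders; tailor the rules to what each team actually needs.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    rbac = client.RbacAuthorizationV1Api()

    read_only_role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace="team-a"),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],                 # "" is the core API group
                resources=["pods", "pods/log"],  # only pods and their logs
                verbs=["get", "list", "watch"],  # read-only verbs
            )
        ],
    )

    rbac.create_namespaced_role(namespace="team-a", body=read_only_role)

A RoleBinding would then grant that role to a specific user or group, rather than handing out broad cluster-level permissions.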

Beyond the configuration parameters of Kubernetes is how the clusters are intended to be used. If a cluster is shared between different teams, the security approach may not be consistent. Adding extra precautions, such as taints and tolerations on node groups, can be a good way to reduce risk. Better still, giving dedicated clusters to specific teams to isolate applications and reduce the blast radius is far preferable to one large multi-tenant cluster.
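
As a rough illustration of the taints-and-tolerations approach (the node, namespace, team and image names here are hypothetical), a node group can be tainted for a single team and that team’s workloads given a matching toleration.

    # A rough sketch: dedicating a node to one team with a taint, and a pod that
    # tolerates it. The node, namespace, team and image names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Taint the node so that only pods tolerating team=payments can be scheduled on it.
    core.patch_node(
        "worker-node-1",
        {"spec": {"taints": [
            {"key": "team", "value": "payments", "effect": "NoSchedule"}
        ]}},
    )

    # A pod belonging to that team carries a matching toleration. A node selector
    # or affinity would also be needed to actually pin it to the tainted nodes.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="payments-api", namespace="payments"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="api", image="example/payments-api:1.0")],
            tolerations=[client.V1Toleration(
                key="team", operator="Equal", value="payments", effect="NoSchedule"
            )],
        ),
    )
    core.create_namespaced_pod(namespace="payments", body=pod)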

For those who are more focused on their application, having a basic understanding of the main components of Kubernetes—network policies, ingress, certificate management, deployments, config maps, secrets and service resources—is going to be key. If the Kubernetes administrator has put pod security policies in place around containers, then some of the security constraints will be imposed top-down and will require modifications to your deployments to bring them in line with those policies. If that isn’t the case, then familiarizing yourself with what a good pod security policy looks like, and why, will help you approach your application security in a more considered way.
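
If you don’t yet have a policy handed down to you, the sketch below shows the kind of restrictions a sensible pod security policy typically expects of a deployment. The namespace, names and image are placeholders, and the exact settings should ultimately come from your administrator’s policy.

    # A minimal sketch of the kind of restrictions a sensible pod security policy
    # typically expects. The namespace, names and image are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="example/web:1.0",
        security_context=client.V1SecurityContext(
            run_as_non_root=True,              # refuse to start the container as root
            allow_privilege_escalation=False,  # block setuid-style escalation
            read_only_root_filesystem=True,    # keep the container filesystem immutable
            capabilities=client.V1Capabilities(drop=["ALL"]),  # drop all Linux capabilities
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web", namespace="team-a"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="team-a", body=deployment)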

Beyond the running container itself, you need to protect the traffic your application sends and receives and the sensitive data it uses, such as database passwords. Network policies restrict traffic flow to applications and let you control what can talk to and from your Kubernetes-based containers. Secrets natively don’t offer any encryption; they are only base64-encoded, so making sure the cluster administrator has enabled encryption at rest for secrets adds that additional security layer when the data is stored in the etcd backend datastore.
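
To illustrate both points (the namespace, policy and secret names below are placeholders), the sketch creates a default-deny ingress network policy for a namespace and then shows how easily anyone with read access can recover a secret’s value.

    # A sketch of both points, with placeholder namespace, secret and policy names:
    # a default-deny ingress policy, then proof that a Secret is only base64-encoded.
    import base64

    from kubernetes import client, config

    config.load_kube_config()

    # Deny all ingress traffic to pods in the namespace unless another policy allows it.
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="team-a",
        body=client.V1NetworkPolicy(
            metadata=client.V1ObjectMeta(name="default-deny-ingress"),
            spec=client.V1NetworkPolicySpec(
                pod_selector=client.V1LabelSelector(),  # empty selector matches every pod
                policy_types=["Ingress"],               # no ingress rules listed = deny all
            ),
        ),
    )

    # Anyone with read access to the Secret can trivially recover the plain text.
    secret = client.CoreV1Api().read_namespaced_secret("db-credentials", "team-a")
    print(base64.b64decode(secret.data["password"]).decode())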

Having the cluster administrator install something such as cert-manager gives your applications an automated way to obtain TLS certificates and hence encrypt data between users and your application, as well as from application to application. If you are sharing the Kubernetes infrastructure with other teams or services, encrypting that traffic will help protect data in transit. There are also service meshes such as Istio, but as they do a lot more than just certificates, they can add more complexity than necessary.
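
Assuming the administrator has installed cert-manager and created an issuer (the issuer, hostname and resource names below are hypothetical), requesting a certificate can be as simple as creating a Certificate resource; cert-manager then keeps the signed certificate in the named secret for your ingress or pods to use and renews it automatically.

    # A sketch of requesting a TLS certificate through cert-manager. It assumes the
    # administrator has installed cert-manager and created a ClusterIssuer named
    # "letsencrypt-prod"; the hostname, namespace and names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    certificate = {
        "apiVersion": "cert-manager.io/v1",
        "kind": "Certificate",
        "metadata": {"name": "web-tls", "namespace": "team-a"},
        "spec": {
            "secretName": "web-tls",          # where cert-manager stores the signed cert
            "dnsNames": ["web.example.com"],
            "issuerRef": {"name": "letsencrypt-prod", "kind": "ClusterIssuer"},
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="cert-manager.io",
        version="v1",
        namespace="team-a",
        plural="certificates",
        body=certificate,
    )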

Overall, cluster administrators will need to pay attention to the implementation details. Making sure that applications are suitably segregated, both from a security and an operational standpoint, is a major decision when architecting cluster topologies. Preferably, an automated, repeatable and consistent mechanism for provisioning clusters securely will allow them to be created at a team or project level as opposed to one multi-tenant cluster. This reduces risk through repeatability while shrinking the blast radius of a potential compromise or an accidental operational mistake that could, in a shared cluster, bring something such as ingress down for all services.

Finally, it is a universal truth that a security system is only as good as how it is monitored and maintained. Constant vigilance is key. Being able to search your audit logs for what is happening inside Kubernetes, as well as alerting on specific events, will help notify you or your CSOC team to investigate potential issues as they arise.
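
As a very rough sketch of what that searching and alerting might look like (the log path and the events flagged here are illustrative, and assume audit logging has been enabled and written as JSON lines), a simple script can scan for verbs and resources you consider sensitive.

    # A very rough sketch of scanning a Kubernetes audit log. It assumes audit
    # logging is enabled and written as JSON lines; the path and the events
    # flagged here are illustrative rather than a recommended alerting policy.
    import json

    SENSITIVE = {
        ("create", "pods/exec"),   # someone exec'ing into a running container
        ("get", "secrets"),        # reads of Secret objects
        ("delete", "namespaces"),  # namespace deletion
    }

    with open("/var/log/kubernetes/audit.log") as log:
        for line in log:
            event = json.loads(line)
            verb = event.get("verb")
            object_ref = event.get("objectRef", {})
            resource = object_ref.get("resource", "")
            if object_ref.get("subresource"):
                resource = f"{resource}/{object_ref['subresource']}"
            if (verb, resource) in SENSITIVE:
                user = event.get("user", {}).get("username", "unknown")
                print(f"ALERT: {user} performed {verb} on {resource}")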

As Kubernetes develops and the ecosystem around it matures, many of these problems will become easier to solve. Until that happens, however, it is essential to have a plan in place and do the necessary work to address them. It may be frustrating and a little tedious, but without it you could open your company up to a host of serious security issues that could prove very costly. Just remember: Slow and steady often wins the race.

Jonathan Shanks

Jonathan Shanks is the CEO and co-founder of Kubernetes delivery platform Appvia. He is a DevOps expert and entrepreneur with nearly two decades of experience leading developers and engineers in scaling and delivering new solutions. At Appvia he leads a highly talented team of engineers and developers to deliver his vision of building a ground-breaking platform of tools that enables large organisations to quickly and securely create innovative new products and services. This is done by harnessing the power of Kubernetes to simplify their infrastructure, speed up delivery and reduce costs. Prior to joining Appvia, Jonathan was Head and Technical Lead at the Home Office. His role involved working with a team of engineers to deliver solutions across multiple digital projects. Jonathan spearheaded initiatives that revitalised aspects of the Home Office’s technical infrastructure, saving significant time and money. He built a team of talented engineers, delivered a hosting platform, ran workshops and managed stakeholders to make this happen. Jonathan gained his earlier experience as a Linux Architect at NYSE Euronext and a Senior Linux Engineer at Betfair.
