How Kubernetes and Containers Will Protect — and Be Protected by — AI
As companies expand their use of predictive and generative AI in building new applications and services, the security of the underlying data becomes increasingly important — and challenging. Container-focused technologies — including Kubernetes for container orchestration — will be increasingly important tools in protecting the integrity, privacy and sovereignty of the data used to fuel organizations’ AI-based initiatives. At the same time, AI will boost organizations’ ability to optimize the Kubernetes platform for running and protecting containerized workloads.
Linux containers and the Kubernetes container orchestration platform provide a robust and resilient environment for deploying AI-enabled applications and services. Containers support AI/ML workloads by making code consistently reproducible and portable across diverse environments, and Kubernetes provides scalability, high availability and automated service discovery.
Of course, AI ups the data risk ante, especially when it comes to workloads that include sensitive and/or high-value data, including the data used in large language models that will soon become (if they aren’t already) competitive differentiators.
An Ecosystem of Protection
Several open-source projects are emerging that provide options for implementing the additional layers of defense in depth that will be required for AI-enabled applications and the data models that support them.
This is in keeping with the history of containers and Kubernetes, both of which have inspired robust collaboration among open-source developers. “The State of Kubernetes Security Report” notes that organizations rely on many open-source security tools to protect their cloud-native applications. Overall, the report states, organizations use an average of 2.1 security-related open-source tools within their Kubernetes environments.
For example:
- 35% of respondents simplify policy management with Open Policy Agent, a toolset and framework for unified policies across cloud-native stacks
- 31% check Kubernetes deployment security against the CIS Kubernetes Benchmark using Kube-bench
- 31% ensure applications adhere to best practices with KubeLinter, a static analysis tool for Kubernetes YAML files and Helm charts
- 28% identify security issues in Kubernetes clusters and cloud-native environments using Kube-hunter, a security testing and scanning tool
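To make the checks performed by tools like KubeLinter and kube-bench concrete, here is a hedged sketch of a hardened Deployment fragment — all names, images, and resource values are placeholders — showing the kind of securityContext settings static analyzers typically flag when they are missing:

```yaml
# Illustrative Deployment fragment; names and values are placeholders.
# Static analyzers such as KubeLinter commonly warn when settings
# like these are absent from Kubernetes YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-service
  template:
    metadata:
      labels:
        app: inference-service
    spec:
      containers:
        - name: model-server
          image: registry.example.com/model-server:1.0  # pin a tag, not :latest
          securityContext:
            runAsNonRoot: true             # avoid running as root
            readOnlyRootFilesystem: true   # immutable container filesystem
            allowPrivilegeEscalation: false
          resources:
            limits:                        # set limits so one workload can't starve a node
              cpu: "1"
              memory: 1Gi
```

Policies enforcing rules like these can also be expressed centrally with Open Policy Agent, so they are applied at admission time rather than caught after deployment.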
All of these projects offer (and will continue to offer) protection for container-based apps in general, but it’s also worth investigating newer projects that use AI and/or can be leveraged specifically to protect container-based AI applications on Kubernetes.
For example, we’ve seen significant investment in simplifying integrity verification for containerized applications in open-source projects such as Sigstore — for signing artifacts, including images and configuration files, and verifying artifact signatures — and SPIFFE/SPIRE for Kubernetes workload identity. We’ve also seen continued investment in maintaining the security of network communications between services within a cluster (east-west) as well as communication between on-cluster and off-cluster services (north-south) with projects like AdminNetworkPolicy and Gateway API.
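Newer policy APIs such as AdminNetworkPolicy build on the core NetworkPolicy resource that ships with Kubernetes today. As a hedged sketch — the namespace, labels, and port below are assumptions, not prescriptions — a default-deny posture for a namespace hosting model-serving workloads might look like this:

```yaml
# Illustrative default-deny ingress policy using the stable NetworkPolicy API.
# AdminNetworkPolicy extends this idea with cluster-scoped, admin-priority rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: model-serving      # assumed namespace
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
    - Ingress                   # with no ingress rules listed, all ingress is denied
---
# Then explicitly allow only the traffic the workload needs, e.g. from a gateway.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway
  namespace: model-serving
spec:
  podSelector:
    matchLabels:
      app: model-server         # assumed workload label
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gateway   # assumed gateway namespace
      ports:
        - protocol: TCP
          port: 8080            # assumed serving port
  policyTypes:
    - Ingress
```

The design choice here — deny everything, then allow narrowly — limits the east-west blast radius if a single workload is compromised.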
Use AI to Secure AI
AI also can be leveraged to help mitigate the risk that AI itself adds. To that end, complementary technologies and projects are emerging that go beyond protecting data at rest and in motion to protecting data in use.
For example, the CNCF Confidential Containers project, otherwise known as CoCo, brings confidential computing to Kubernetes by using hardware-trusted execution environments (TEEs) to verify the integrity of the environment and to further isolate running containers. CoCo will be especially important in hybrid cloud environments because it enables organizations to deploy workloads on public and private infrastructure while reducing the risk of workload and data compromise.
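In practice, CoCo surfaces to application teams through the standard RuntimeClass mechanism: a pod opts into a confidential runtime by name. The sketch below is a hypothetical example — the handler name depends on the hardware TEE and how CoCo was installed on the cluster:

```yaml
# Hypothetical example: opting a workload into a Confidential Containers runtime.
# The runtimeClassName value depends on the TEE and the CoCo installation;
# check the RuntimeClasses actually registered in your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-inference      # placeholder name
spec:
  runtimeClassName: kata-qemu-tdx   # assumed handler name
  containers:
    - name: model-server
      image: registry.example.com/model-server:1.0   # placeholder image
```

The appeal of this shape is that the confidential-computing machinery stays out of the application code entirely; only the pod spec changes.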
Another CNCF project uses AI to more efficiently protect and streamline AI (and other) workloads: K8sGPT scans Kubernetes clusters and diagnoses and triages issues in simple English. K8sGPT, which was accepted into the CNCF late last year at the sandbox level, will be a boon to SREs, whose jobs are being made both easier and more difficult by AI.
Likewise, the open-source KoPylot tool uses AI to analyze Kubernetes configurations and resource descriptions to identify not only security risks but also ways in which the platform can be optimized for running AI models. Kubernetes assistants like KoPylot may ultimately help organizations bridge the skills gap by providing automated security recommendations and other alerts in a way that helps teams prioritize and mitigate issues.
You can expect to see more AI-focused projects swirling around the containers/Kubernetes ecosystem. You can also expect to see a focus on standing up new AI protections and tools within Kubernetes itself.
On that note, it’s important to apply time-tested security best practices when building cloud-native AI-based apps. For example, “The 2024 State of Kubernetes Security Report” recommends using Kubernetes-native security controls in combination with declarative data to protect container workloads, extending security across the application life cycle, and adopting tools that support DevSecOps practices.
Indeed, both existing and new security controls will be needed to harness the power of AI in a way that is sustainable, transparent, scalable and — above all — secure.