Tigera Extends Project Calico Reach to Secure AI Workloads
Tigera this week added an edition of its integrated container networking and security platform for Kubernetes environments that is specifically designed to secure artificial intelligence (AI) workloads.
The platform is based on open source Project Calico software. Kubernetes, meanwhile, has emerged as a de facto standard for deploying AI applications. The challenge is that these workloads introduce a series of additional application programming interfaces (APIs) and data flows that need to be secured and governed. For example, when AI models are being trained, pods communicate laterally with other pods to exchange, analyze and refine data before writing the trained model back to storage. By default, that pod-to-pod communication is unsecured and can be exploited by attackers to move laterally within the cluster toward more sensitive assets.
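The usual countermeasure is the default-deny pattern common in Calico deployments, which can be expressed in a short policy manifest. The sketch below is illustrative rather than taken from Tigera's announcement; the ml-training namespace is a hypothetical stand-in for wherever training pods run.

```yaml
# Hypothetical default-deny policy for a training namespace.
# With no allow rules, Calico denies all ingress and egress for the
# selected pods, so any pod-to-pod traffic must be explicitly permitted.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: ml-training   # assumed namespace for AI training pods
spec:
  selector: all()          # apply to every pod in the namespace
  types:
    - Ingress
    - Egress
```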
Utpal Bhatt, chief marketing officer for Tigera, said the Calico ingress gateway enforces policies to ensure that only trusted users and applications can access the model. A web application firewall (WAF), meanwhile, inspects incoming HTTP traffic to detect and block AI attack vectors, such as SQL injection and cache poisoning, as catalogued by the OWASP Foundation.
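In Calico's commercial editions, the WAF is typically switched on cluster-wide through an operator resource rather than per-pod configuration. The snippet below is a sketch based on the documented ApplicationLayer pattern; treat the exact field names as assumptions to be checked against the Calico version in use.

```yaml
# Sketch: enabling the web application firewall via the Tigera operator.
# Field names follow the documented ApplicationLayer resource and should
# be verified against the release in use.
apiVersion: operator.tigera.io/v1
kind: ApplicationLayer
metadata:
  name: tigera-secure
spec:
  webApplicationFirewall: Enabled
```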
Tigera also enforces granular network policies, including staged policies for testing and governance, to enable zero-trust microsegmentation, noted Bhatt. A cluster mesh capability makes it possible to unify the management of those policies as AI workloads are distributed across multiple clusters to better isolate training, inference, and production workloads, he added.
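Staged policies let teams preview a policy's verdicts in flow logs without enforcing it, then promote it once the traffic it would block is confirmed to be unwanted. The following sketch assumes hypothetical inference workload labels and is illustrative, not part of Tigera's announcement.

```yaml
# Hypothetical staged policy: reports what it would allow or deny in
# flow logs without actually enforcing, so it can be validated before
# being promoted to an enforced NetworkPolicy.
apiVersion: projectcalico.org/v3
kind: StagedNetworkPolicy
metadata:
  name: isolate-inference
  namespace: ml-inference        # assumed namespace
spec:
  selector: app == 'inference'   # assumed workload label
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'api-gateway'  # assumed upstream label
      destination:
        ports:
          - 8443
```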
Finally, Tigera makes use of extended Berkeley Packet Filter (eBPF) technology to manage data flows, in addition to providing detailed flow logs, DNS logging and visual service graphs that help teams understand AI service interactions and identify misconfigurations.
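Calico's eBPF dataplane is normally enabled cluster-wide through the Tigera operator. A minimal sketch, assuming the operator-managed Installation resource is in use:

```yaml
# Sketch: switching the Calico dataplane to eBPF via the Tigera operator.
# Verify prerequisites (kernel version, direct API server access) before
# applying; details vary by Calico release.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: BPF
```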
It’s not clear to what degree AI workloads are being attacked, but they typically have access to the kind of sensitive data that cybercriminals seek to exfiltrate. The same cybercriminals are also attempting to poison the data that large language models (LLMs) are exposed to in an effort to corrupt the output they generate.
Regardless of why an AI model is being attacked, Calico enables IT teams to allowlist which Kubernetes pods are permitted to communicate with each other to thwart potential malicious activity, said Bhatt.
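Layered on top of a default-deny baseline like the one sketched earlier, an allowlist reduces to a policy that names the only permitted peers. The labels and port below are hypothetical:

```yaml
# Hypothetical allowlist: only pods labeled app=trainer may reach the
# parameter-server pods, and only on TCP port 8080; everything else in
# the namespace remains denied by the default-deny policy.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-trainer-to-paramserver
  namespace: ml-training
spec:
  selector: app == 'parameter-server'  # assumed label on target pods
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'trainer'     # assumed label on client pods
      destination:
        ports:
          - 8080
```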
Unfortunately, despite all the hype surrounding AI, the average cybersecurity team is once again playing catch-up with an emerging technology. In fact, it might take a cataclysmic event before organizations appreciate the need to allocate additional resources to secure AI applications and agents, both before and after they are deployed in production environments.
In the meantime, IT teams running Kubernetes clusters should consider the cybersecurity implications as more AI workloads are deployed. Cybercriminals are already targeting Kubernetes environments, which will only become more tempting to compromise once it is apparent they are hosting high-value AI applications. The issue, of course, is that many of those Kubernetes clusters are not secure by default, so there is still plenty of opportunity for mistakes to be made at a time when individuals with combined AI, Kubernetes and cybersecurity expertise remain scarce.