Unlike virtual machines, which each run their own kernel, containers on the same host share the host's kernel, making it easier for an attacker who compromises one workload to move laterally to others. The consequence is serious: a single compromised container can put the entire infrastructure at risk.
Long argues for a shift in how workload isolation is handled. Traditional approaches lean heavily on monitoring dashboards that alert security teams only after an incident has occurred; others layer security tools on top of existing infrastructure, adding complexity and performance overhead. Instead of continuing to patch vulnerabilities reactively, the industry needs a way to ensure true isolation at the container level from the start.
The conversation also explores what this challenge means for AI workloads. With over 65% of AI/ML workloads running on Kubernetes, the risks of a shared kernel are only growing. Security concerns around GPUs, TPUs, and DPUs further complicate the landscape, making it even more important to secure these environments without sacrificing performance.
Long also highlights how the lack of built-in workload isolation has pushed DevSecOps teams into a reactive stance. Instead of improving security practices and building better workflows, teams are stuck constantly monitoring, patching, and mitigating threats that stronger isolation could have prevented from the start.
Against an evolving threat landscape and growing infrastructure complexity, the discussion underscores the urgent need to rethink how workloads are secured. The industry has long accepted certain security trade-offs, but as Long points out, it is time to question whether those compromises are still necessary.