Cloud-Native Architecture’s Next Test: Holding Up Under Agentic AI
Enterprises are juggling three migrations at once: lifting workloads off legacy virtualization stacks, modernizing what’s already running on Kubernetes, and figuring out where agentic AI fits inside both. Each of those shifts has its own runtime, its own traffic patterns, and its own failure modes, and the seams between them are where most of today’s cloud-native pain actually lives.
Alan Shimel, broadcasting from SUSECON in Prague, sits down with Traefik Labs CEO Sudeep Goswami to dig into how those layers are starting to converge. Goswami argues that AI-generated code is landing in production faster than any previous wave of software, which means the runtime, not the pipeline, is becoming the real control point. Without dynamic governance at that layer, autonomous agents end up with more reach than anyone intended.
They get into the mechanics of what “brakes on the flywheel” look like in practice — policy enforcement that travels with the workload, identity-aware routing for agent-to-service calls, and observability that treats AI traffic as a first-class citizen rather than just another HTTP stream. The takeaway is that ingress, API gateway and service mesh decisions made today directly shape how safely agents can be deployed tomorrow.
The discussion also covers the new integrations Traefik is rolling out across SUSE Rancher, RKE2 and the SUSE AI Factory, and what that combination signals about the direction of the broader cloud-native stack. Goswami’s view is that architecture choices made in the next 12 to 18 months — around runtime governance, portability and open standards — will determine which platforms can actually hold up under a decade of AI-driven change.


