Cloud-Native Infrastructure in the Age of AI: Open Control and Orchestration
At KubeCon Europe 2025, Alan Shimel sits down with Mirantis CTO Shaun O’Meara and CEO Alex Freedland to explore where cloud-native infrastructure is headed now that AI workloads are moving from pilot projects to day-to-day operations.
Freedland reaches back to the OpenStack years to make his case. Back then, open governance let competitors share a common platform while keeping their own value on top. He sees the same need today. AI inference jobs appear across data centers, public clouds, and compact edge boxes, yet GPU capacity is scarce and data-sovereignty rules keep information close to home. A task might start in Frankfurt, shift to an on-prem cluster in Dublin, then finish at a roadside 5G node. Something neutral has to connect those dots, and Kubernetes, he argues, remains the most portable choice.
O’Meara agrees but stresses that orchestration is only half the solution. Teams also need a unified control plane that cuts down tool sprawl and enforces policy while jobs are running. He pictures a future filled with thousands of “ephemeral agents” that spin up for a single task and disappear minutes later. Without a shared layer to cache models, route work to free GPUs, and apply guardrails, cost and risk escalate quickly.
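To make that idea concrete, here is a minimal sketch of one way an ephemeral agent could work on Kubernetes today: a Job that requests a single GPU and is garbage-collected shortly after it finishes. This is an illustration under stated assumptions, not anything Mirantis described; the `launch_agent` helper, image name, and namespace are hypothetical, and a real control plane would layer model caching, GPU-aware routing, and policy checks on top.

```python
# Minimal sketch of an "ephemeral agent": a Kubernetes Job that requests one
# GPU and is deleted automatically two minutes after it completes.
# Assumes the official `kubernetes` Python client and a cluster with the
# NVIDIA device plugin installed; image and namespace are illustrative.
from kubernetes import client, config


def launch_agent(task_id: str, image: str = "example.com/inference-agent:latest"):
    # task_id is assumed to be a short, DNS-safe string (hypothetical convention)
    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    container = client.V1Container(
        name="agent",
        image=image,
        args=["--task", task_id],
        resources=client.V1ResourceRequirements(
            # The GPU request is what lets the scheduler route this agent
            # to a node with a free accelerator.
            limits={"nvidia.com/gpu": "1"}
        ),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(generate_name=f"agent-{task_id}-"),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=120,  # the "disappear minutes later" part
            backoff_limit=0,                 # run once; don't retry on failure
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "ephemeral-agent"}),
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[container],
                ),
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

The `ttl_seconds_after_finished` field does the ephemeral part: Kubernetes deletes the finished Job and its pod on its own, so thousands of agents per day leave no residue behind, which is exactly the cleanup burden O’Meara warns about when no shared layer exists.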
Their takeaway is practical: the next breakthrough in AI won’t be a flashier model but dependable, open-source plumbing that lets existing models run where they need to without the usual operational headache. In other words, reliable infrastructure, not hype, will decide who wins the coming wave of AI-driven applications.