Google Adds Multi-Cluster Orchestration Service for Kubernetes
Google Cloud today announced at the KubeCon + CloudNativeCon Europe 2025 conference a public preview of an orchestration framework for managing multiple Kubernetes clusters.
Laura Lorenz, a software engineer at Google Cloud, told conference attendees that the Multi-Cluster Orchestrator (MCO) service will make it simpler to manage fleets of Kubernetes clusters while also providing more granular control over how IT infrastructure resources are dynamically allocated.
That ability to dynamically schedule workloads is especially critical at a time when the capacity made available by cloud services is becoming more constrained, she noted.
MCO addresses that challenge by making it easier to manage workloads across multiple Kubernetes clusters as a single unit, including defining guardrails and policies and configuring automatic rollovers between clusters to enable disaster recovery across multiple cloud regions.
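Google has not published MCO's API in detail, so as a purely illustrative sketch (not MCO's actual interface), the rollover idea can be modeled as a prioritized fleet of clusters in which a workload is rescheduled onto the next healthy cluster when the preferred one fails a health check. The cluster names and regions below are hypothetical:

```python
# Hypothetical illustration of automatic rollover across a fleet of
# clusters; this is NOT the MCO API, which has not been published.
from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    region: str
    healthy: bool


def pick_target(fleet, preferred_order):
    """Return the first healthy cluster, walking the preference list in order."""
    by_name = {c.name: c for c in fleet}
    for name in preferred_order:
        cluster = by_name.get(name)
        if cluster and cluster.healthy:
            return cluster
    raise RuntimeError("no healthy cluster available for rollover")


fleet = [
    Cluster("gke-eu-west", "europe-west1", healthy=False),   # primary is down
    Cluster("gke-eu-north", "europe-north1", healthy=True),  # standby
]

# The primary is unhealthy, so the workload rolls over to the standby.
target = pick_target(fleet, ["gke-eu-west", "gke-eu-north"])
print(target.name)
```

A real orchestrator would of course drive this decision from continuous health signals and policy, rather than a static list, but the preference-ordered fallback captures the disaster-recovery behavior the service is described as automating.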
Additionally, there is a set of plug-ins that provide integrations with, for example, the Argo continuous delivery (CD) platform.
The overall goal is to enable a new age of dynamic provisioning, says Lorenz.
It’s not clear how aggressively IT organizations are looking to maximize the utilization of cloud infrastructure. Historically, IT infrastructure of any type has been over-provisioned by application developers who are more concerned with ensuring availability than with containing costs.
However, as IT teams embrace FinOps best practices to optimize cloud spending, they require tools and frameworks that make it simpler to optimize utilization of infrastructure, especially expensive graphics processing unit (GPU) resources. The challenge is that, in addition to giving IT teams the tools needed to manage consumption of IT infrastructure resources more granularly, software engineering teams need more visibility into how much it actually costs to run their code.
There are, of course, multiple platforms for managing fleets of Kubernetes clusters, but Google Cloud is now making a case for a service that is directly integrated into its cloud platform. The degree to which IT teams opt to rely on that service in the age of multicloud computing remains to be seen.
The one certain thing is that more organizations than ever are looking for tools and platforms that enable them to more easily automate the management of Kubernetes clusters, at a time when deployments of those clusters are starting to outpace the DevOps skills available to manage them.
In fact, a recent Futurum Research survey finds 61% of respondents report they are using Kubernetes clusters to run some (41%) or most (19%) of their production workloads. The top workloads deployed on Kubernetes are AI/ML/generative AI and data-intensive workloads such as analytics, tied at 56% each, closely followed by databases (54%), modernized legacy applications (48%) and microservices-based applications (45%).
Hopefully, as it becomes simpler to manage Kubernetes environments, the pace at which cloud-native applications are being built and deployed will only accelerate. In the meantime, IT teams should be exploring their management options as the number of clusters required to run all those applications continues to steadily increase.