Mirantis Adds Consulting Team to Help Deploy MCP Servers on Kubernetes Clusters
Mirantis today added a set of services for organizations looking to deploy artificial intelligence (AI) workloads that access Model Context Protocol (MCP) servers deployed on Kubernetes clusters.
Randy Bias, vice president of open source strategy and technology at Mirantis, said the MCP AdaptiveOps service is designed to provide access to Mirantis engineering teams that will help train internal IT teams on how best to deploy multiple MCP servers as needed.
Originally developed by Anthropic, MCP is emerging as a de facto standard for making data available to AI applications and agents. The challenge is that while there is a lot of demand for AI applications, few IT teams have much expertise deploying these workloads.
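Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The sketch below, using only the Python standard library, shows roughly what a tool-call request and response look like; the field names follow the MCP specification's `tools/call` method, but the `echo` tool and the `handle` helper are hypothetical, included purely for illustration:

```python
import json

# MCP messages are JSON-RPC 2.0. This is the rough shape of a tool call;
# the "echo" tool here is hypothetical, used only to illustrate the exchange.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
}

def handle(raw: str) -> str:
    """Toy server-side handler: parse the request, run the tool, reply."""
    msg = json.loads(raw)
    if msg["method"] != "tools/call":
        raise ValueError("unsupported method")
    text = msg["params"]["arguments"]["text"]
    response = {
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }
    return json.dumps(response)

reply = json.loads(handle(json.dumps(request)))
print(reply["result"]["content"][0]["text"])  # hello
```

Real MCP servers layer capability negotiation, resource listing and transport (stdio or HTTP) on top of this message format, which is why packaging them as containers on Kubernetes is a natural fit.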
The services teams from Mirantis will enable those IT teams to close that skills gap by working alongside experts to deploy applications and associated MCP servers using a control plane developed by Mirantis, said Bias.
In addition to supporting greenfield deployments of any MCP server, Mirantis engineers will also audit existing MCP servers to provide recommendations for better aligning them with external and enterprise ecosystems. Mirantis also will provide ongoing operational support, including service level agreements, to ensure MCP servers are stable and secure, added Bias.
Despite its management challenges, Kubernetes is rapidly becoming a de facto standard for running AI workloads, which are typically packaged as sets of containers. The Kubernetes orchestration engine then makes it simpler to scale those workloads up and down as required, a crucial capability given how much data is typically being processed. In fact, with the rise of AI, the number of stateful applications running on Kubernetes clusters is expected to increase substantially in the months and years ahead.
In the meantime, however, the number of IT professionals who understand how to manage AI workloads running on Kubernetes infrastructure remains limited. As such, reliance on services that, in addition to managing these workloads, also train internal IT teams to eventually assume responsibility for them is likely to prove crucial. Otherwise, the pace at which organizations can operationalize AI will be severely constrained.
It’s not clear to what degree AI will transform IT infrastructure environments as, for example, more graphics processing units (GPUs) and other types of AI accelerators are added to Kubernetes clusters. The one thing that is certain is that IT infrastructure environments will become more challenging to manage as the number and classes of processors employed to run AI applications increase. The best way to rise to that challenge, instead of designing a comprehensive IT platform, is to focus on individual use cases that give IT teams an opportunity to gain some hands-on expertise, said Bias.
Regardless, as part of any effort to operationalize AI, the number of MCP servers that expose data to AI agents and other applications will substantially increase in Kubernetes environments. The only issue that needs to be resolved now is how to securely deploy and manage them at a scale likely to be far greater than almost anyone would have anticipated just a few short months ago.