Buoyant to Add MCP Support to Linkerd Service Mesh
Buoyant has announced it plans to add support for the Model Context Protocol (MCP) to the open source Linkerd service mesh for Kubernetes clusters.
Company CEO William Morgan said that capability will make it possible for IT teams to extend the governance and security controls they are applying to traffic generated by application programming interfaces (APIs) to network traffic generated by MCP servers and clients.
Originally developed by Anthropic, MCP provides a consistent interface for exposing data to AI applications and agents, and it is rapidly becoming a de facto standard. The challenge is that the tools for applying the governance and security controls needed to create identity-based guardrails, preventing AI agents, for example, from accessing sensitive data, are still few and far between. As a result, most MCP traffic today is entirely opaque, noted Morgan.
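To make the traffic in question concrete: MCP messages are JSON-RPC 2.0, so an agent invoking a tool emits a small, structured request that a proxy sitting in the data path could inspect. A minimal sketch is below; the tool name and arguments are hypothetical, not taken from any real MCP server.

```python
import json

# MCP is built on JSON-RPC 2.0. A client asking an MCP server to invoke
# a tool sends a "tools/call" request. The tool name and arguments here
# are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_customer_records",  # hypothetical tool exposed by the server
        "arguments": {"customer_id": "c-1042"},
    },
}

# The serialized message as it would appear on the wire.
wire_message = json.dumps(request)
print(wire_message)
```

It is exactly this request/response exchange, today largely unmonitored, that a mesh proxy between MCP clients and servers would be positioned to observe and police.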
Without a means to cryptographically apply and enforce zero-trust guardrails via a service mesh such as Linkerd, AI applications will not be deployed in production environments as broadly or as quickly as many organizations might prefer, because of compliance concerns, added Morgan.
Agentic network traffic differs in behavior because AI agents create persistent sessions that span multiple interactions. Given the nature of those interactions, however, it’s not possible to consistently predict when agentic AI traffic might spike, an issue that becomes exponentially more challenging to manage as more AI agents are added to an application environment.
With the introduction of MCP support, Linkerd will provide the same visibility, access control, and traffic-shaping capabilities for agentic AI traffic that it currently provides for API traffic, said Morgan. Those observability capabilities include metrics on resource, tool, and prompt usage for tracking failure rates, latencies, and the volume of data transmitted, in order to identify anomalous behaviors. IT teams can then deny requests or terminate connections as needed, noted Morgan.
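For a sense of what identity-based guardrails look like in Linkerd today, the mesh already ships policy resources that restrict which workload identities may reach a given server over mutual TLS. The config fragment below is a sketch using Linkerd's existing `Server`, `MeshTLSAuthentication`, and `AuthorizationPolicy` resources; the names, namespace, port, and identity are hypothetical, and the MCP-specific extensions Buoyant has announced are not yet published, so this only illustrates the authorization model they would presumably build on.

```yaml
# Sketch: restrict an MCP server's port so that only a designated
# agent-gateway workload identity can connect. All names are hypothetical.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: mcp-server
  namespace: ai-tools
spec:
  podSelector:
    matchLabels:
      app: mcp-server
  port: 8080
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: agent-gateway-identity
  namespace: ai-tools
spec:
  identities:
    - "agent-gateway.ai-tools.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: mcp-allow-agent-gateway
  namespace: ai-tools
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: mcp-server
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: agent-gateway-identity
```

Because the identity is derived from the workload's mTLS certificate rather than a bearer token, a stolen credential alone does not grant access to the server, which is the zero-trust property Morgan describes.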
In general, it’s now more a question of how controls will be applied to AI agent traffic rather than if. Buoyant is making a case for an approach that extends the scope of a service mesh rather than requiring IT teams to absorb the cost of deploying and managing an additional platform.
Regardless of approach, the one thing that is certain is that AI agents will be targeted by cybercriminals. Once a cybercriminal obtains credentials that provide access to an AI agent, they will be able to commandeer entire workflows. The challenge is ensuring that even when credentials are stolen, other controls and policies are in place to limit the scope of any potential breach.
It’s not immediately apparent to what degree organizations will put these controls in place before they operationalize AI agents. Hopefully, more organizations will have learned enough from the past to apply the appropriate controls proactively. What is certain, however, is that it’s only a matter of time before there is a major cybersecurity breach involving AI agents that, at least in hindsight, was all too preventable.


