Buoyant Extends Linkerd Service Mesh Across Multiple Kubernetes Clusters
Buoyant today released an update to the open source Linkerd service mesh that enables pods running on different Kubernetes clusters to establish direct TCP connections across a flat network.
In addition, version 2.14 of Linkerd now fully supports the Gateway application programming interface (API), the standard added to Kubernetes to provide a common mechanism for configuring traffic routing resources such as classes of HTTP requests.
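In practice, that support means Linkerd can consume routing resources written in the Gateway API's own types. The sketch below is a minimal, hypothetical HTTPRoute that defines a class of HTTP requests for a "checkout" service by path prefix; the service name, namespace and port are illustrative, and the exact API version and parentRef fields Linkerd accepts may vary by release.

```yaml
# Hypothetical example: a Gateway API HTTPRoute that defines a class of HTTP
# requests (everything under /api/v1/checkout) for a "checkout" Service.
# Names, namespace and port are placeholders, not taken from Linkerd's docs.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: checkout-routes
  namespace: shop
spec:
  parentRefs:
    - name: checkout        # attach the route to the checkout Service
      kind: Service
      group: core
      port: 8080
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api/v1/checkout
      backendRefs:
        - name: checkout    # send matching requests to the same Service
          port: 8080
```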
Buoyant CEO William Morgan said that as more complex cloud-native applications are built, organizations will need a simpler way to connect them across multiple Kubernetes clusters. The latest Linkerd update addresses that issue by, for example, providing a “gateway-less” mode for cross-cluster communication that reduces latency, improves security by preserving workload identity in mTLS calls, and reduces the overall amount of traffic moving through gateways, he noted.
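Linkerd's existing multicluster support works by mirroring exported services between linked clusters; in the gateway-less mode, traffic to an exported service is resolved to the remote pods directly over the flat network rather than passing through a gateway. The sketch below shows how such an export might look. The service name and namespace are hypothetical, the label key is Linkerd's standard export marker, and the "remote-discovery" value is an assumption about how the 2.14 pod-to-pod mode is selected, not a confirmed detail of the release.

```yaml
# Hedged sketch: exporting a Service for cross-cluster access with Linkerd
# multicluster. The label key mirror.linkerd.io/exported is Linkerd's standard
# export marker; the value "remote-discovery" (gateway-less, pod-to-pod mode)
# is an assumption here rather than a confirmed detail of the 2.14 release.
apiVersion: v1
kind: Service
metadata:
  name: orders              # hypothetical service
  namespace: shop
  labels:
    mirror.linkerd.io/exported: remote-discovery
spec:
  selector:
    app: orders
  ports:
    - port: 8080
      targetPort: 8080
```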
As more organizations appreciate the need for a service mesh to manage traffic within and between Kubernetes clusters, Buoyant claims the number of stable Kubernetes clusters running Linkerd has doubled, with organizations such as Adidas, Microsoft, Plaid and DB Schenker all deploying it in the last 18 months.
In general, Buoyant is recommending organizations upgrade to version 1.28 of Kubernetes because it adds native support for orchestrating the sidecar containers that Linkerd uses to run its proxies, said Morgan.
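Kubernetes 1.28 delivers that capability as an alpha feature, behind the SidecarContainers feature gate, by letting an init container declare restartPolicy: Always so it keeps running for the life of the pod. The pod below is a generic sketch of the mechanism with placeholder image names, not Linkerd's actual injected manifest.

```yaml
# Generic sketch of a Kubernetes 1.28 native sidecar: an init container with
# restartPolicy: Always starts before the app and keeps running alongside it.
# Image names are placeholders, not Linkerd's real proxy images.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: proxy-sidecar
      image: registry.example.com/proxy:latest   # placeholder
      restartPolicy: Always                      # marks this init container as a sidecar
  containers:
    - name: app
      image: registry.example.com/app:latest     # placeholder
```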
Unfortunately, upgrading Kubernetes clusters remains challenging for a lot of IT organizations. The concern is that applications will break because one or more dependencies will no longer exist. Upgrades of any platform are still widely perceived to be the IT equivalent of heart surgery, noted Morgan.
However, as more organizations embrace platform engineering to centrally manage DevOps workflows, the number of clusters running the latest version of Kubernetes should steadily increase, he added.
Regardless of the version or distribution of Kubernetes, however, it’s clear more cloud-native applications are being deployed on clusters running in production environments. As the number of those clusters increases, a host of connectivity issues, such as ensuring high availability, start to arise, said Morgan. Those issues create more demand for a service mesh to manage the connectivity requirements, he added.
Service meshes also provide IT teams with an abstraction layer that makes it simpler to manage networking and security across multiple Kubernetes clusters.
Buoyant is making a case for a lighter-weight service mesh for Kubernetes clusters that is simpler to implement than rival approaches. In some cases, organizations will find they already have multiple service meshes in place, implemented by different teams. In other cases, organizations may decide an API gateway meets their current communication requirements. One way or another, however, managing connectivity within and across Kubernetes clusters only becomes more challenging. As such, most organizations would be well-advised to at least start becoming familiar with service mesh tools and platforms.
After all, it’s never a good idea to learn how something works after the need for it has become critical.