Latest Release of Linkerd Service Mesh Includes Preview of Windows Support
Buoyant this week released an update to the open-source Linkerd service mesh that, in addition to providing access to an experimental preview of Windows support, makes it possible to add Kubernetes clusters to the service mesh declaratively using a GitOps workflow.
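In practice, the declarative approach amounts to checking the multicluster link configuration into Git and letting a GitOps tool apply it, rather than generating it imperatively with the linkerd multicluster link command. The sketch below is illustrative only: it assumes a Link custom resource in the multicluster.linkerd.io API group with fields such as targetClusterName and clusterCredentialsSecret, and the exact API version and field names should be confirmed against the Linkerd 2.18 multicluster documentation.

```yaml
# Illustrative, GitOps-friendly Link resource for joining a target cluster to
# the mesh. API version and field names are assumptions; verify them against
# the Linkerd 2.18 multicluster docs before applying.
apiVersion: multicluster.linkerd.io/v1alpha3
kind: Link
metadata:
  name: east
  namespace: linkerd-multicluster
spec:
  targetClusterName: east
  targetClusterDomain: cluster.local
  clusterCredentialsSecret: cluster-credentials-east   # kubeconfig Secret for the target cluster
  gatewayAddress: east-gateway.example.com             # assumed gateway endpoint for the target cluster
  gatewayPort: "4143"
```

Because the resource is plain YAML, it can live in the same Git repository as the rest of a cluster's configuration and be reconciled by tools such as Argo CD or Flux like any other Kubernetes manifest.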
While the Linkerd control plane still runs on Kubernetes, IT teams will soon be able to extend the reach of the service mesh to Windows workloads alongside the existing support for Linux virtual machines and Kubernetes pods.
At the same time, Linkerd 2.18 adds metrics that capture Linkerd’s protocol detection behavior and can now optionally read the protocol for a port from the appProtocol field on Kubernetes Service objects rather than relying on protocol detection by default. That latter capability addresses an issue that arises when clusters cannot respond to protocol detection because they are running other workloads at a high rate of utilization.
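Declaring the protocol up front happens on the Service itself. The appProtocol field is a standard per-port Kubernetes field, so a manifest like the following minimal sketch is all that is needed to let the mesh skip detection for that port; whether Linkerd honors a given value is governed by the 2.18 configuration.

```yaml
# Standard Kubernetes Service declaring the application protocol for a port,
# so Linkerd 2.18 can read it rather than detect it at connection time.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      appProtocol: http   # tells the mesh this port speaks HTTP
```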
Other capabilities being added include the ability to propagate metadata dynamically across federated services, the ability to filter multicluster service labels and annotations to avoid sharing cluster-specific metadata, and the ability to configure proxy CPU usage in terms of the number of cores available on the machine.
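Of those, the proxy CPU change is the most operationally visible. The sketch below shows how such a setting might be expressed as Helm values; the keys used here are assumptions based on the release announcement, not the confirmed Linkerd 2.18 chart schema, and should be checked against the Helm chart reference.

```yaml
# Hypothetical Helm values sketch: sizing proxy worker threads relative to the
# cores available on the node. Key names are assumptions, not confirmed schema.
proxy:
  runtime:
    workers:
      minimum: 1            # never run with fewer than one worker thread
      maximumCPURatio: 0.5  # cap workers at half the machine's available cores
```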
Finally, in future releases of Linkerd, Buoyant will no longer install Gateway application programming interface (API) types by default, to reduce friction with Gateway API resources that might already be installed on a Kubernetes cluster.
Buoyant CEO William Morgan said this release of Linkerd, dubbed Battlescar, reflects the fact that as more IT teams deploy fleets of Kubernetes clusters, it has become apparent that subtle changes, such as the ability to declaratively add clusters, needed to be made to the service mesh that connects them all. More organizations are now starting to manage Kubernetes clusters as cattle rather than as a handful of pets, he noted.
In general, Buoyant continues to be committed to advancing a lighter-weight approach to deploying a service mesh that has always been designed to run natively on Kubernetes, said Morgan. A recent survey conducted by the Cloud Native Computing Foundation (CNCF) finds 42% of respondents are currently running a service mesh in a production environment, with another 11% running a service mesh in pilot and 15% planning to add one this year.
Most organizations don’t typically adopt a service mesh until the number of APIs they are trying to integrate across multiple Kubernetes clusters reaches a critical mass that requires a higher level of abstraction to manage. There has been, in recent years, a fierce debate about the best way to solve that issue using very different types of service meshes. Today, however, there is broader agreement that lighter-weight approaches requiring less expertise to deploy and manage will be more widely adopted. The challenge now is determining which approach lends itself best to the way IT teams have already decided to manage Kubernetes environments that continue to become more distributed with each passing day.