Buoyant Update to Linkerd Service Mesh Makes Applications More Resilient
Buoyant this week added the ability to combine services running across multiple Kubernetes clusters, and accessed via the Linkerd service mesh, into a single logical service.
In addition, version 2.17 of Linkerd also provides rate-limiting capabilities, increased visibility and control over Kubernetes egress traffic, and support for distributed tracing enabled by OpenTelemetry agent software.
Buoyant CEO William Morgan said the ability to federate multiple services increases resiliency by ensuring a replica of any given service remains available whenever a Kubernetes cluster goes offline or application programming interface (API) calls need to be rerouted to maintain performance levels.
That federated service capability is an extension of the load balancing capabilities the Linkerd service mesh already provides, he noted.
The rate-limiting capabilities added to Linkerd 2.17 further improve reliability by ensuring that no single server is overloaded, added Morgan.
As more organizations deploy multiple Kubernetes clusters, the need for greater visibility and control over traffic is becoming a more pressing concern. The latest update to Linkerd helps address that issue by providing, at the application level, full visibility into all egress traffic. IT teams can view the source, destination and volume of all traffic leaving a cluster, including the hostnames and associated configurations, HTTP paths or gRPC methods.

IT teams can also invoke egress security policies to allow or disallow any of that traffic at a granular level. That capability makes it simpler to apply zero-trust IT policies across a Kubernetes environment without having to change any application code, said Morgan.
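As a rough illustration of what a default-deny egress posture might look like, the sketch below shows a hypothetical Kubernetes resource in the style of Linkerd's policy API. The resource kind, API version, namespace and field names here are assumptions for illustration only, not confirmed Linkerd 2.17 configuration; consult the Linkerd documentation for the actual schema.

```yaml
# Hypothetical sketch only: resource kind, apiVersion and fields are
# assumptions in the style of Linkerd's policy.linkerd.io API, not
# verified Linkerd 2.17 configuration.
apiVersion: policy.linkerd.io/v1alpha1
kind: EgressNetwork
metadata:
  name: all-egress
  namespace: demo          # hypothetical namespace
spec:
  # Deny all traffic leaving the cluster by default; individual
  # destinations would then be allowed explicitly at a granular
  # level, supporting a zero-trust posture without code changes.
  trafficPolicy: Deny
```

The point of such a policy is that the default becomes "disallow", with exceptions granted per destination, which is the granular allow/disallow control described above.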
Now that a critical mass of Kubernetes clusters is being deployed, more organizations need better ways to manage all the APIs being invoked to access the microservices that make up a cloud-native application. While there is no shortage of service mesh options, Buoyant continues to make the case for the Linkerd service mesh, advanced under the auspices of the Cloud Native Computing Foundation (CNCF), as being simpler to deploy and manage than those alternatives.
It’s not clear yet who within IT organizations is assuming responsibility for deploying and managing a service mesh. However, many of the organizations that are successfully deploying cloud-native applications have set up platform engineering teams to manage the underlying IT infrastructure, noted Morgan.
As a methodology for managing DevOps workflows at scale, platform engineering continues to gain traction. A Techstrong Research survey finds that 61% of respondents work for organizations that are already applying platform engineering principles across all or some element of their IT operations. Improving developer productivity (59%), the need for standardization of configurations (58%), reducing costs (51%), taming the increased complexity of modern applications (49%) and improving security (48%) are the primary drivers of platform engineering adoption, the survey finds.
Regardless of which team deploys a service mesh, one thing is certain: The more distributed a cloud-native application environment becomes, the more likely it is that a service mesh will become a standard element of the infrastructure stack needed to deploy and manage those applications.