Service Mesh is Not a Product, it’s a Design Pattern

Istio is having a moment. Already on the rise, search interest in the open source service mesh soared after last August’s release of Istio version 1.0.

For now, Istio is the king of service mesh, and for many, it defines what a service mesh is: a product for managing traffic between microservices.

That, I would argue, is a problem. Because while microservices are an important use case, they’re not the only one. Service mesh isn’t simply a product for wrangling microservices—it’s a design pattern for delivering services in the cloud.

As others have observed, most of Istio’s functionality—traffic management, security, observability—is similar to what you’d find in some application delivery controllers. Except for features specifically designed for microservices, like service discovery and east-west traffic management, Istio is for all intents and purposes a next-gen software ADC. It only works with containerized apps, but architecturally speaking, there’s no reason a service mesh like it—or incorporating it—couldn’t work with traditional apps as well.

To understand why that is and why it matters, it helps to review why Istio is so well-matched to its task.

At the dawn of the cloud age, apps were apps. Maybe they were virtualized, maybe not. Either way, they were the same apps you used to run on your own servers, except now you were running them in the cloud.

And that was … not great, exactly, but pretty good. The apps ran fine, but they brought along a lot of baggage. They expected to run on traditional servers and on a traditional OS, not some airy realm of storage and compute. They weren’t designed to deploy anywhere or scale up and down in tiny, efficient gradations—presumably part of the reason you moved to the cloud in the first place. You could get by—and, in fact, the vast majority of enterprises are doing it right now—but you could also see that there must be a better solution.

It didn’t take long for people to realize that what did run neatly in the cloud were applications broken into microservices, especially when containerized. Pop a microservice in a Docker container and you could replicate it, ship it to your private cloud or to AWS or Azure. The microservice didn’t care. It just kept doing its thing.

Except, like any app, microservices couldn’t do anything useful on their own—not in production, anyway. They needed external services, such as traffic management, security and observability (sound familiar?), to function under real-world conditions.

That’s where ADCs come in—or should, anyway. Traditional ADCs are physical or virtual appliances. They’re discrete entities that need care and feeding. Pets rather than cattle, as the saying goes. That’s fine if you only need to wrangle a few of them, but imagine using individual ADCs to deliver services to a galaxy of microservices spinning up, spinning down, moving from cloud to cloud to data center to cloud. You have to manage and configure hundreds of different ADCs, each responsible for different services in different places. It’s possible, given enough resources, but it’s madness.

The great innovation of service mesh is that it separates the management functions (or “control plane”) of the ADCs from the data plane, the code that actually delivers load balancing, security and so forth. More, it centralizes the control plane so that the user can manage services for any application running anywhere on the network from one interface. Instead of racks of appliances (or their software equivalents), proxies near the application (Istio uses Envoy) do the grunt work under the direction of a central controller—a kind of “big brain” that manages and analyzes the work of the proxies.
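To make the split concrete, here is a minimal sketch of the kind of declarative configuration a service mesh control plane accepts. It is modeled on Istio’s VirtualService resource (the `reviews` service name and the `v1`/`v2` subsets are illustrative, not from this article); the operator writes one such policy centrally, and the controller pushes the resulting routing rules down to every Envoy proxy in the mesh:

```yaml
# Hypothetical canary split: send 90% of traffic to subset v1, 10% to v2.
# The control plane distributes this rule to the data-plane proxies;
# no individual proxy is configured by hand.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The point of the pattern is visible in the shape of the file: it describes intent for a service, not the configuration of any particular appliance, which is what makes it workable at the scale of hundreds of proxies.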

The benefits of this model for microservices are obvious. There is no practical way to manage and maintain hundreds of appliances for thousands of services that are constantly winking in and out. But crucially, these same benefits apply to traditional applications as well. Whether you’re running containers in production or not (and most enterprises aren’t), modern IT’s arrow of complication curves exponentially upward. Hybrid cloud, public clouds, multi-cloud: all mean more things to manage, in more places—more settings to tweak and dials to watch, more apps to provision, scale, move and protect. The average enterprise maintains operations in three to five clouds, and 81% have a strategy for using multiple public clouds.

So never mind microservices. Delivering services to traditional apps across multiple clouds is hard enough. Seen in that light, service mesh isn’t so much a solution for microservices as a solution for complexity—a much larger proposition, given that most workloads are still monolithic applications on virtual machines and bare metal servers, and will be for the foreseeable future. It’s quite possible that what might at first glance appear a niche application for microservices could represent the future of the ADC industry.

Ranga Rajagopalan

In the 15 years before co-founding Avi Networks, Ranga was an architect and developer of several high-performance distributed operating systems as well as networking and storage data center products. Before his current role as CTO, he was a Senior Director in Cisco’s Data Center business unit, responsible for platform software on the Nexus 7000 product line. He joined Cisco through the acquisition of Andiamo, where he was one of the lead architects of the SAN-OS operating system, and began his career at SGI as an IRIX kernel engineer on the Origin series of ccNUMA servers. Ranga holds a Master of Science in electrical engineering from Stanford University and a Bachelor of Engineering in EEE from BITS, Pilani, India, and has several patents in networking and storage.