Solo.io Extends Networking Reach Beyond Kubernetes

Solo.io today launched a preview of a unified control plane for dynamically managing network services accessed by applications running in container environments, on serverless computing frameworks and on virtual machine platforms.

Brian Gracely, head of product management for Solo.io, says Gloo Fabric extends the capabilities of the company’s existing Gloo Platform for managing networking services using the open source Istio service mesh running on a Kubernetes cluster.

Gloo Fabric extends the application programming interface (API) management framework that Solo.io previously built on top of Istio to other application environments. That approach will enable IT teams to finally centralize the management of networking and security services by applying zero-trust policies via a single control plane, says Gracely.
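Solo.io has not published Gloo Fabric's configuration format in detail, but because the platform builds on Istio, the kind of zero-trust policy Gracely describes can be illustrated with Istio's own API. The sketch below, which uses hypothetical namespace and service account names, denies all traffic to workloads in a namespace by default and then explicitly allows one calling service, applied programmatically with the Kubernetes Python client.

```python
# Illustrative only: zero-trust-style Istio AuthorizationPolicy objects applied
# with the Kubernetes Python client. Namespace and service names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at a cluster running Istio

# An AuthorizationPolicy with an empty spec denies all requests to workloads
# in its namespace, establishing a deny-by-default posture.
deny_all = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "deny-all", "namespace": "payments"},
    "spec": {},
}

# A second policy explicitly allows calls from one workload identity,
# expressed as the SPIFFE-style principal of its service account.
allow_frontend = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "allow-frontend", "namespace": "payments"},
    "spec": {
        "action": "ALLOW",
        "rules": [
            {"from": [{"source": {"principals": ["cluster.local/ns/web/sa/frontend"]}}]}
        ],
    },
}

api = client.CustomObjectsApi()
for policy in (deny_all, allow_frontend):
    api.create_namespaced_custom_object(
        group="security.istio.io",
        version="v1beta1",
        namespace=policy["metadata"]["namespace"],
        plural="authorizationpolicies",
        body=policy,
    )
```

The promise of a single control plane is that equivalent policies could be defined once and pushed to workloads that do not run on Kubernetes at all.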

Those services can either be managed in isolation from one another across both public and private cloud computing environments or logically grouped together as IT organizations see fit, he adds.

As more cloud-native applications are deployed in production environments, the need for application connectivity between highly distributed microservices is becoming more important. The challenge is that those microservices, in addition to running across multiple Kubernetes clusters, are also invoking resources that reside on serverless computing frameworks and legacy virtual machines. Gloo Fabric makes it possible to extend the application connectivity framework Solo.io created using Istio into those environments.
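The mechanics Gloo Fabric uses to reach those environments were not detailed, but Istio itself hints at what pulling a virtual machine into the mesh looks like: a WorkloadEntry resource registers the VM's address and identity so that mesh routing and security policies apply to it much as they would to a pod. The following sketch uses hypothetical names and addresses.

```python
# Illustrative only: registering a legacy VM with the mesh via an Istio
# WorkloadEntry. The address, namespace and labels below are hypothetical.
from kubernetes import client, config

config.load_kube_config()

vm_workload = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "WorkloadEntry",
    "metadata": {"name": "billing-vm-01", "namespace": "billing"},
    "spec": {
        "address": "10.0.12.34",          # the VM's reachable IP address
        "labels": {"app": "billing"},     # lets existing Services and route rules select the VM
        "serviceAccount": "billing-vm",   # identity used for mutual TLS inside the mesh
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="billing",
    plural="workloadentries",
    body=vm_workload,
)
```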

It’s not clear just what impact application connectivity will have on how networking has been historically managed, but as a layer of abstraction for connecting networking services becomes more programmatically accessible, the need to rely on dedicated network administrators to provision those services starts to decline. There will continue to be a need to deploy routers and switches to create the physical network underlay, but the Layer 3 through Layer 7 services delivered via that network infrastructure can be dynamically configured as a natural extension of a DevOps workflow, noted Gracely.

In some cases, networking services may continue to be managed by a team of networking specialists. In others, a DevOps team will extend its reach into the realm of network operations (NetOps) in a GitOps-driven way, managing networking services programmatically alongside infrastructure-as-code to reduce the number of errors that might otherwise be made, says Gracely.

In effect, the service mesh provides an overlay through which networking—and, by extension, security services—can be programmatically managed using rules to define how applications communicate. Instead of having to wait for a network administrator to perform those tasks, the application development teams are afforded more control.
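What such a rule looks like in practice can again be sketched with Istio, on which Gloo builds. The example below, with hypothetical service and namespace names, is a VirtualService that shifts 10% of traffic for a checkout service to a new version; stored in Git and applied from a pipeline, it is the kind of declarative, programmatically managed networking change described above.

```python
# Illustrative only: an Istio VirtualService traffic rule applied programmatically.
# Service, namespace and subset names are hypothetical; the v1/v2 subsets assume a
# corresponding DestinationRule has already been defined.
from kubernetes import client, config

config.load_kube_config()

route_rule = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "checkout", "namespace": "shop"},
    "spec": {
        "hosts": ["checkout"],
        "http": [
            {
                "route": [
                    {"destination": {"host": "checkout", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "checkout", "subset": "v2"}, "weight": 10},
                ]
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="shop",
    plural="virtualservices",
    body=route_rule,
)
```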

It may be a while before networking is subsumed into DevOps workflows given the cultural divide that often exists between these teams. However, as it becomes more apparent that service meshes make it feasible to provision network and security services in a more agile way, the need to converge DevOps and NetOps will become more pressing. The days when IT teams needed to wait days, sometimes weeks, for a NetOps team to provision services for an application are coming to an end.

In the meantime, DevOps teams would be well-advised to begin experimenting with service meshes such as Istio to reduce the networking friction that currently slows the pace of application deployment.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
