4 Service Proxy Projects From CNCF

Standardizing communication between various apps and servers is paramount in our world of connected software. To manage traffic in a scalable way, software systems typically deploy a service proxy as a backend utility that sits between software components. A service proxy intercepts traffic and applies logic before forwarding the request to another application or server.

Service proxies are a generic concept and may provide features such as gathering metrics, logging data, routing requests, encrypting data, caching content or applying mutual TLS (mTLS). A service proxy is commonly used to control ingress and egress: it can filter incoming requests and act on or transform data. You could also use a service proxy to apply security policies.

Nowadays, service proxies are foundational components of inter-service communication within large cloud-native software infrastructure. For example, popular service meshes utilize the Envoy proxy, which sits alongside containerized applications to handle network communication.

Below, we’ll dive into four proxies that fall under the Cloud Native Computing Foundation (CNCF) umbrella. These projects include proxies and load balancers that provide ingress features, and most are Kubernetes-native.


Envoy: Cloud-native high-performance service proxy

Website | GitHub

As a microservices ecosystem scales, you must also build a scalable layer for network communication. Initially developed at Lyft, Envoy is a distributed C++ proxy and load balancer designed to solve the networking and observability challenges of large microservices networks. Envoy is intended to run alongside each individual application, so it can serve as a self-contained proxy for a single service or as the communication bus for a network of interconnected services.

Envoy is a very popular service proxy used by Airbnb, AWS, Grubhub, Netflix and many other cloud-native organizations. Envoy has been famously used in the Istio service mesh to enable a universal data plane. Envoy ships with APIs to programmatically configure it and comes with advanced load balancing and observability features. The Envoy filter chain is also extensible, allowing for additional logic to be inserted. Envoy is open source and a graduated project hosted by CNCF.


Contour: A Kubernetes ingress controller using Envoy proxy

Website | GitHub

Contour is an open source Kubernetes ingress controller that adds some spice on top of Envoy—it acts as a control plane designed to smooth out the traffic ingress management side of things. Contour can dynamically configure your Envoy instances and delegate ingress configuration across multiple teams. Contour’s HTTPProxy resource (formerly IngressRoute) can also be used to perform blue-green deployments, enabling iterative software releases and testing.

Contour can be thought of as an alternative to service meshes such as Linkerd, Istio or Kuma. However, Contour addresses a narrower use case: it is a load balancer explicitly for north-south traffic only, which lets it keep a lean footprint. Contour supports both the Kubernetes Ingress API and the HTTPProxy API, a Kubernetes custom resource. At the time of writing, Contour is an incubating project with the CNCF.


BFE: Open source Layer 7 load balancer

Website | GitHub

Originally developed at Baidu, BFE takes its name from Baidu Front End. It’s an open source Layer 7 load balancer that can be used for routing, load balancing, security and observability. Written in Go, BFE offers a flexible framework for building new features and plugins. It supports many protocols, including HTTP, HTTPS, SPDY, HTTP/2, gRPC, WebSocket and TLS. At the time of writing, BFE is a CNCF sandbox project.

One interesting thing about BFE is its domain-specific syntax for structuring expressions. These human-readable commands make it easy to understand and write rules. For example:

 // return true if the request Host is "www.bfe-networks.com" or "bfe-networks.com"
 req_host_in("www.bfe-networks.com|bfe-networks.com")


OpenELB: A bare-metal load balancer alternative

Website | GitHub

As we’ve seen above, cloud-based services typically use load balancers to connect applications. It may seem counterintuitive, but bare-metal environments are adopting cloud-native infrastructure such as Kubernetes, too, and they can benefit from the same load balancing features. However, the managed load balancer services that cloud providers supply for Kubernetes Services of type LoadBalancer don’t exist on bare metal. OpenELB is intended to solve this issue as a load balancer designed explicitly for bare-metal, edge and virtualized environments.

OpenELB can be installed on Kubernetes, KubeSphere and K3s, according to the documentation. The project has an optional BGP router mode which, if enabled, provides advanced availability features. OpenELB, a sub-project of KubeSphere, is an open source project in sandbox mode with the CNCF. Developers can participate or request features through the project’s GitHub repository.

Bill Doerrfeld

Bill Doerrfeld is a tech journalist and analyst. His beat is cloud technologies, specifically the web API economy. He began researching APIs as an Associate Editor at ProgrammableWeb, and since 2015 has been the Editor at Nordic APIs, a high-impact blog on API strategy for providers. He loves discovering new trends, interviewing key contributors, and researching new technology. He also gets out into the world to speak occasionally.
