8 CNCF Tools to Run Kubernetes at the Edge and Bare Metal

We’re beginning to see real interest in running Kubernetes at the edge and on bare metal. A study from Spectro Cloud finds that 35% of respondents already deploy Kubernetes at the edge, and 81% say there are “compelling” use cases for edge in their industry.

One reason to run Kubernetes on edge nodes is to perform real-time AI/ML processing closer to where data is generated. Environments with heavy data-processing requirements include the factory floor, smart homes, vehicles, robotics, appliances, energy systems and other IoT systems. Production use at the edge may soon rise as companies increase their investment in hybrid multicloud setups and a growing variety of Kubernetes clusters and distributions.

There are also many new tools that make this easier. For example, running Kubernetes on edge nodes with a utility like KubeEdge can bring benefits such as lower latency and reduced cloud data transfer fees. Moving processing to local machines can also increase reliability or help meet security and privacy compliance mandates.

Below, we’ll review a handful of tools that help you manage Kubernetes at the edge and on bare metal. All projects listed below are part of the Cloud Native Computing Foundation (CNCF), a hub of cloud-native projects that are well maintained and relatively stable.

KubeEdge

Kubernetes-native edge computing framework (project under CNCF)

Website | GitHub

KubeEdge, which became an incubating CNCF project in 2020, helps extend the cloud-native capabilities that operators have come to expect to the edge. The framework can be used to help create an edge cloud computing ecosystem, handling unique constraints such as network reliability and resource limitations on edge nodes. Using KubeEdge, you can deploy ML/AI applications at the edge or scale highly distributed edge architectures.
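Once an edge node has joined the cluster through KubeEdge, workloads can be pinned to it with an ordinary Kubernetes manifest. The sketch below assumes the `node-role.kubernetes.io/edge` label that KubeEdge’s keadm applies to edge nodes; the container image is a placeholder:

```yaml
# Sketch: pin an inference workload to a KubeEdge edge node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # label applied to KubeEdge edge nodes
      containers:
        - name: inference
          image: example.com/inference:latest   # placeholder image
```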

SuperEdge

Container management for edge computing

Website | GitHub

SuperEdge is another framework that extends Kubernetes to edge environments. Its core feature set includes components like edge-health, which runs on edge nodes to monitor their health. There’s also lite-apiserver, a lightweight version of the Kubernetes API server that provides caching and authentication capabilities at the edge.

SuperEdge also uses a network tunnel to proxy requests between the cloud and the edge. Because it relies on these proxies, the project bills itself as a non-intrusive tool for configuring edge devices. SuperEdge was created by Tencent Cloud and is now a CNCF sandbox project.

Akri

A Kubernetes resource interface for the edge

Website | GitHub

Some operators may want to run Kubernetes across edge nodes. However, at the edge of a network, you may be supporting many devices that are too small to run Kubernetes themselves. These devices often have intermittent availability and use unique communication protocols. For example, ONVIF is a standard used by many IP cameras.

The Akri open source project is designed to discover and manage small edge devices, also known as leaf devices. Akri is built on the native Kubernetes device plugin framework. According to the documentation, Akri excels at “handling the dynamic appearance and disappearance of leaf devices.” At the time of writing, Akri is a CNCF sandbox project.
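As a sketch of how this looks in practice, an Akri Configuration resource tells the Akri agent which discovery handler to use, for instance the ONVIF handler for IP cameras. The field names below follow Akri’s documented Configuration CRD (`akri.sh/v0`); treat the exact values as illustrative:

```yaml
# Sketch: discover ONVIF cameras as Kubernetes resources via Akri.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: onvif-cameras
spec:
  discoveryHandler:
    name: onvif          # use the ONVIF discovery handler
    discoveryDetails: ""
  capacity: 1            # how many nodes may use each discovered camera at once
```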

OpenYurt

Extend K8s to the edge

Website | GitHub

OpenYurt is another tool to consider if you want to extend cloud-native infrastructure like Kubernetes to the edge. It’s an extensible framework that brings cloud-native capabilities such as elasticity, high availability, logging and DevOps workflows into edge environments.

For example, OpenYurt provides self-healing capabilities: if a node’s connection goes offline, the node keeps operating and syncs automatically once the connection is restored. It provides this and many more capabilities for edge service orchestration and leaf device management.

Many companies have used OpenYurt to extend the native Kubernetes experience to edge environments across logistics, transportation, IoT, CDN, retail and manufacturing. At the time of writing, OpenYurt is a CNCF sandbox project.

Metal3.io

Bare metal host provisioning for Kubernetes

Website | GitHub

Metal3.io is a tool for provisioning Kubernetes on bare metal hosts. It offers a Kubernetes API to manage bare metal provisioning, and the provisioning stack itself runs on Kubernetes. Metal3.io uses a BareMetalHost custom resource to define a host’s desired state, report its health status and hold provisioning details, such as settings for the image to deploy.
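A minimal BareMetalHost sketch, with placeholder BMC address, MAC and image URLs, might look like this:

```yaml
# Sketch: declare a bare metal host for Metal3 to provision.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
spec:
  online: true                               # power the host on
  bootMACAddress: "00:11:22:33:44:55"        # placeholder MAC
  bmc:
    address: ipmi://192.168.1.100            # placeholder BMC endpoint
    credentialsName: worker-0-bmc-secret     # Secret holding BMC credentials
  image:
    url: http://images.example.com/ubuntu.qcow2          # placeholder image
    checksum: http://images.example.com/ubuntu.qcow2.md5sum
```

Metal3’s controllers reconcile this desired state against the physical machine, powering it on and writing the image to disk.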

Tinkerbell

A workflow engine for provisioning bare metal

Website | GitHub

Another utility designed to help provision bare metal is Tinkerbell, the open source bare metal provisioning engine maintained by Equinix. It comprises several key microservices, including a network boot server, a metadata service, an operating system installation environment and a workflow engine. The workflow engine, called Tink, is the main provisioning engine; it communicates using gRPC and offers a CLI for developers.

Tinkerbell is generic enough to work with any operating system and provides declarative APIs to control automation programmatically. And since Tinkerbell is backed by Equinix Metal, the project is likely to remain actively maintained. Tinkerbell is a CNCF sandbox project; for more information, check out the project documentation.
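As an illustration, a Tink workflow is driven by a template describing the actions to run on a machine. The sketch below is based on Tinkerbell’s documented template format; the action image, disk and URLs are placeholders:

```yaml
# Sketch: a Tinkerbell template that streams an OS image to disk.
version: "0.1"
name: ubuntu_provisioning
global_timeout: 1800
tasks:
  - name: os-installation
    worker: "{{.device_1}}"        # the target machine, bound at workflow creation
    actions:
      - name: stream-image
        image: quay.io/tinkerbell-actions/image2disk:v1.0.0   # illustrative action image
        timeout: 600
        environment:
          DEST_DISK: /dev/sda
          IMG_URL: http://images.example.com/ubuntu.raw.gz    # placeholder
          COMPRESSED: "true"
```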

OpenELB

Load balancer implementation for Kubernetes in bare metal, edge and virtualization

Website | GitHub

Cloud-based services typically rely on cloud provider load balancers to expose applications. Bare-metal environments can benefit from the same load-balancing features, but those provider services aren’t available there. OpenELB solves this problem as a load balancer designed explicitly for bare-metal, edge and virtualized environments.

OpenELB can be installed on Kubernetes, KubeSphere and K3s, according to the documentation. The project has an optional BGP mode that, if enabled, provides advanced availability features. OpenELB, a sub-project of KubeSphere, is a CNCF sandbox project.
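As a sketch, exposing an application through OpenELB in Layer 2 mode involves defining an Eip address pool and annotating a LoadBalancer Service. The annotation keys below follow OpenELB’s documentation; the addresses, interface and names are placeholders:

```yaml
# Sketch: an OpenELB address pool plus an annotated LoadBalancer Service.
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: edge-eip-pool
spec:
  address: 192.168.0.100-192.168.0.110   # placeholder address range
  protocol: layer2
  interface: eth0                        # placeholder NIC
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: edge-eip-pool
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```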

Cluster API

A Kubernetes subproject to simplify cluster life cycle management

Website | GitHub

As organizations try to wrangle multiple Kubernetes clusters, certain tools have emerged to ease the process. Started by the Kubernetes Cluster Lifecycle Special Interest Group (SIG Cluster Lifecycle), Cluster API is one such project aiming to simplify working with multiple clusters. “Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading and operating multiple Kubernetes clusters,” describes the Cluster API Book. While not limited to edge or bare metal, CNCF contributors have explored how to use Cluster API to run K8s on bare metal, along with other supporting libraries.
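To give a flavor of those declarative APIs, the sketch below defines a Cluster whose infrastructure is backed by a Metal3 bare metal provider; the names and CIDR are placeholders:

```yaml
# Sketch: a Cluster API cluster backed by the Metal3 infrastructure provider.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # placeholder pod CIDR
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: Metal3Cluster                # bare metal provider, as an example
    name: edge-cluster
```

Cluster API controllers reconcile this desired state, delegating machine creation to the referenced infrastructure provider.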

On The Edge With Kubernetes

Kubernetes is a robust application orchestrator. And now, environments outside of the cloud can reap similar benefits: container orchestration, increased scalability and improved reliability. Although running K8s on edge or bare metal is typically regarded as tricky, these tooling abstractions are making Kubernetes more versatile for nuanced circumstances.

Our roundup focused on free, open-source CNCF tools aiding Kubernetes on the edge and bare metal. Did we miss any? Are you using a different project to achieve similar goals? Feel free to comment below!

Bill Doerrfeld

Bill Doerrfeld is a tech journalist and analyst. His beat is cloud technologies, specifically the web API economy. He began researching APIs as an Associate Editor at ProgrammableWeb, and since 2015 has been the Editor at Nordic APIs, a high-impact blog on API strategy for providers. He loves discovering new trends, interviewing key contributors, and researching new technology. He also gets out into the world to speak occasionally.
