Kubermatic Updates Control Plane for Kubernetes

Kubermatic announced today that it has updated its open source platform to give IT administrators more control over how Kubernetes clusters are provisioned.

Version 2.16 of Kubermatic Kubernetes Platform (KKP) makes a set of preset management functions available either through a graphical user interface (GUI) or via infrastructure-as-code tools that invoke the application programming interfaces (APIs) exposed by KKP.
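
In practice, that pattern looks something like the minimal Python sketch below, which requests a new cluster from a preset over HTTP instead of clicking through the GUI. The endpoint path, payload fields and version number shown here are illustrative assumptions rather than KKP's documented API, which should be consulted for the actual schema.

```python
# Illustrative sketch of driving a cluster-management API from code.
# NOTE: the endpoint path and payload fields below are hypothetical
# placeholders, not KKP's documented schema -- consult the KKP API docs.
import os
import requests

KKP_API = os.environ.get("KKP_API", "https://kkp.example.com/api/v2")
TOKEN = os.environ["KKP_TOKEN"]  # service account token supplied via the environment

def create_cluster(project_id: str, name: str, preset: str) -> dict:
    """Ask the management API to provision a new cluster from a named preset."""
    payload = {
        "cluster": {
            "name": name,
            "credential": preset,          # hypothetical: name of the stored preset
            "spec": {"version": "1.20.2"}, # hypothetical: desired Kubernetes version
        }
    }
    resp = requests.post(
        f"{KKP_API}/projects/{project_id}/clusters",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_cluster("my-project", "demo-cluster", "my-aws-preset"))
```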

As part of that expanded accessibility, KKP 2.16 adds support for the open source Open Policy Agent (OPA), which enables IT teams to apply compliance policies as code in a Kubernetes environment.
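
KKP's OPA support is aimed at enforcing such policies inside the cluster, but the basic policy-as-code idea can be illustrated with the short Python sketch below, which asks a locally running OPA server for an admission decision. The policy package path and the shape of the input document are assumptions made for this example.

```python
# Rough illustration of policy-as-code: ask a locally running OPA server
# whether a workload is allowed. The policy package ("kubernetes.admission")
# and the input document shape are assumptions made for this example.
import requests

OPA_URL = "http://localhost:8181/v1/data/kubernetes/admission/deny"

def check_workload(image: str) -> list:
    """Return the list of policy violations OPA reports for this workload."""
    admission_input = {
        "input": {
            "request": {
                "kind": {"kind": "Pod"},
                "object": {
                    "spec": {"containers": [{"name": "app", "image": image}]}
                },
            }
        }
    }
    resp = requests.post(OPA_URL, json=admission_input, timeout=10)
    resp.raise_for_status()
    # OPA's data API returns {"result": [...]}; an empty list means no rule denied it.
    return resp.json().get("result", [])

if __name__ == "__main__":
    violations = check_workload("nginx:latest")
    print("denied:" if violations else "allowed", violations)
```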

Finally, KKP 2.16 adds support for Arm processors and a technology preview of a forthcoming integration with Kubeflow, an open source framework for deploying machine learning workloads on Kubernetes clusters.

Sascha Haase, vice president for edge computing at Kubermatic, says the eventual goal is to automate the provisioning of an entire AI environment, including Jupyter notebooks, across a fleet of Kubernetes clusters.

As Kubernetes clusters become more widely deployed, it is apparent that DevOps teams with deep programming expertise will need to manage Kubernetes environments alongside IT administrators who typically rely on GUIs to manage IT environments.

At its core, KKP is a master control plane for centrally managing fleets of Kubernetes clusters running on multiple platforms. Life cycle management of those clusters, including provisioning, scaling, updating and cleanup, can all be automated.

As more organizations begin to deploy Kubernetes at scale, the need for a consistent set of processes for provisioning and updating clusters becomes more acute. Fortunately, there are now multiple ways of achieving that goal using control planes that provide a higher level of management abstraction above the APIs exposed by Kubernetes.

Thus far, there’s no consensus on the number of Kubernetes clusters that constitutes a fleet. Many Kubernetes clusters are initially launched by individual application development teams. Many IT teams don’t discover how many applications have been deployed on a Kubernetes cluster until those application development teams come looking for support from a central IT function. Of course, the delineation of duties between application developers and IT administrators is not always clearly defined, which is why organizations need a control plane that exposes both a GUI and a set of APIs. A developer might also choose to invoke an API one day and use a GUI tool the next – there are no absolutes.

Regardless of whether those Kubernetes clusters are deployed in the cloud, in a local data center or at the network edge, the total cost of operating a Kubernetes environment will soon have to hold up under scrutiny. Organizations may value the agility that Kubernetes platforms enable when deploying microservices-based applications, but most are also struggling to contain costs in the wake of the economic downturn caused by the COVID-19 pandemic.

Most IT teams are now being tasked with fulfilling a tall order: find a way to reduce IT costs without adversely impacting the rate at which applications are built, deployed and updated. More often than not, that means relying on higher levels of automation.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
