Mirantis Launches Open Source Control Plane Project for Kubernetes Clusters
Mirantis today added a distributed container management environment (DCME) to its portfolio that provides IT teams with a control plane they can deploy anywhere to manage Kubernetes clusters.
Randy Bias, vice president of open source strategy and technology for Mirantis, said the open source k0rdent project makes use of the Kubernetes Cluster application programming interface (API) to provide IT teams with a platform for managing clusters based on any distribution. Testing has been completed on AWS EC2, AWS Elastic Kubernetes Service (EKS), Azure Compute, Azure Kubernetes Service (AKS), vSphere and OpenStack.
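For readers unfamiliar with the Cluster API, the sketch below shows in broad strokes how a management cluster built on it can be driven programmatically: a workload cluster is declared as a custom resource, and the control plane reconciles it into existence on the target infrastructure. This is an illustrative example only; the cluster name, namespace and provider reference are hypothetical placeholders, and k0rdent's own custom resources may differ from the upstream Cluster API objects used here.

```python
# Illustrative sketch: declaring a workload cluster on a Cluster API-based
# management cluster using the Kubernetes Python client. Names and the
# infrastructure reference are placeholders, not k0rdent-specific resources.
from kubernetes import client, config

# Authenticate against the management cluster using the local kubeconfig.
config.load_kube_config()
api = client.CustomObjectsApi()

# A Cluster API "Cluster" object describing the desired workload cluster.
# The infrastructureRef points at a provider-specific object (created
# separately) that tells the control plane which cloud to provision on.
workload_cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "demo-cluster", "namespace": "default"},
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["192.168.0.0/16"]}},
        # Hypothetical provider reference; the real object depends on the
        # target cloud (AWS, Azure, vSphere, OpenStack and so on).
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta2",
            "kind": "AWSCluster",
            "name": "demo-cluster",
        },
    },
}

# Submit the desired state; the management cluster's controllers take it
# from here and provision the cluster asynchronously.
api.create_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",
    namespace="default",
    plural="clusters",
    body=workload_cluster,
)
```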
Previously, IT teams would have needed to commit to a managed service to access similar capabilities, noted Bias.
That capability will make it simpler for DevOps or platform engineering teams to build and deploy an internal developer platform (IDP) through which the development of cloud-native applications can be centralized, he added.
Cloud service providers have been using control planes to manage IT environments at scale. k0rdent now makes it possible for internal IT teams to centrally manage Kubernetes environments in a similar way. At its core, a control plane provides a framework that determines how data is managed, routed and processed.
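To make that fleet-level view concrete, the short sketch below shows one way a central control plane can report on every workload cluster it manages. It assumes the upstream Cluster API custom resources are installed on the management cluster and is illustrative only; k0rdent's actual resources and tooling may differ.

```python
# Illustrative sketch: querying every workload cluster registered with a
# Cluster API-based management cluster and printing its lifecycle phase.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# List Cluster objects across all namespaces of the management cluster.
clusters = api.list_cluster_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",
    plural="clusters",
)

for item in clusters.get("items", []):
    name = item["metadata"]["name"]
    phase = item.get("status", {}).get("phase", "Unknown")
    print(f"{name}: {phase}")
```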
It’s not clear to what degree internal IT teams are now directly managing fleets of Kubernetes clusters versus relying on a managed service to perform those tasks on their behalf. A substantial number of organizations, however, either would simply prefer not to incur the additional cost of a managed service or face regulatory compliance requirements that prevent them from using one.
The challenge is that as Kubernetes clusters become more distributed, for example out to the network edge, the need to centrally manage those clusters at scale becomes a more pressing concern. One of the reasons many organizations initially embrace platform engineering as a methodology for managing DevOps workflows at scale is the need to manage large numbers of Kubernetes clusters. The overall goal is to provide a better experience for application developers, who would prefer to write code rather than learn how to manage YAML files.
That’s crucial because Kubernetes is increasingly becoming the default platform for deploying new applications, most of which are built using containers. The more difficult it is to build and deploy those applications, the slower software moves into production environments.
Each IT team will need to determine to what degree to manage Kubernetes itself. In some instances, IT teams within the same organization might come to very different conclusions about how best to proceed. The one certain thing is that Kubernetes has steadily become more accessible to more IT teams, many of which will soon be able to take advantage of artificial intelligence (AI) to reduce the level of expertise required to manage multiple Kubernetes clusters. In the longer term, it should become more feasible for traditional IT administrators, who generally don’t have a lot of programming expertise, to manage Kubernetes clusters.
That all requires, of course, gaining access to a control plane that enables those Kubernetes clusters to be managed at scale in the first place.