vCluster Embraces Karpenter for Dynamic Scaling of Virtual Kubernetes Nodes
vCluster Labs today added the ability to automatically scale nodes running on a virtual Kubernetes cluster.
Company CEO Lukas Gentele said the Auto Nodes capability added to the vCluster platform is enabled by support for Karpenter, an open source automation framework for scaling Kubernetes clusters that was originally developed by Amazon Web Services (AWS).
That capability makes it possible to monitor pods inside the virtual cluster, including ones that have not yet been scheduled, dynamically provision new nodes with specific constraints applied, and automatically remove unused nodes once workloads terminate.
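The article does not describe the internals of Auto Nodes, but the pattern Gentele outlines mirrors a typical Karpenter-style reconciliation loop: find pods the scheduler cannot place, add capacity that satisfies their constraints, and retire nodes once they sit empty. The sketch below, written against the official Python kubernetes client, is illustrative only; the provisioning and removal steps are stubbed out as print statements and do not reflect vCluster's or Karpenter's actual code.

```python
# Illustrative sketch of a Karpenter-style loop; not vCluster or Karpenter code.
from kubernetes import client, config

def find_unschedulable_pods(v1: client.CoreV1Api):
    """Return pending pods that the scheduler has not been able to place."""
    pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending").items
    unschedulable = []
    for pod in pending:
        for cond in (pod.status.conditions or []):
            # The scheduler reports an unplaceable pod via the PodScheduled condition.
            if cond.type == "PodScheduled" and cond.status == "False" and cond.reason == "Unschedulable":
                unschedulable.append(pod)
    return unschedulable

def reconcile_once():
    config.load_kube_config()  # kubeconfig pointed at the (virtual) cluster
    v1 = client.CoreV1Api()

    for pod in find_unschedulable_pods(v1):
        # A real autoscaler would provision a node matching the pod's constraints
        # (resource requests, node selectors, taints and tolerations).
        print(f"would provision a node for {pod.metadata.namespace}/{pod.metadata.name}")

    for node in v1.list_node().items:
        pods_on_node = v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={node.metadata.name}"
        ).items
        # Nodes left with no workloads (ignoring system pods) are removal candidates.
        if not any(p.metadata.namespace != "kube-system" for p in pods_on_node):
            print(f"node {node.metadata.name} is empty and could be removed")

if __name__ == "__main__":
    reconcile_once()
```

In practice, Karpenter batches pending pods, selects capacity that satisfies their combined constraints and consolidates or removes nodes once they are empty; Auto Nodes surfaces that behavior inside a virtual cluster.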
IT teams can now use infrastructure-as-code (IaC) tools such as Terraform or OpenTofu, or similar tools they have developed on their own, to declaratively define the infrastructure needed to automatically scale nodes within a virtual Kubernetes cluster.
Auto Nodes also provides native support for frameworks such as NVIDIA Base Command Manager (BCM), for managing artificial intelligence (AI) workloads running on graphics processing units (GPUs), and KubeVirt, an open source project that makes it possible to encapsulate kernel-based virtual machines (KVMs) in containers so they can be deployed on a Kubernetes cluster.
As a result, it becomes simpler for IT teams to scale, for example, private nodes running in isolation on a virtual Kubernetes cluster.
IT teams can also now more easily shift workloads between cloud providers and on-premises IT environments as pricing, availability or specific policy constraints dictate, all without changing application code or the way a virtual cluster is configured, noted Gentele.
At a time when more organizations than ever are sensitive to the total cost of IT, vCluster software provides a means for rapidly spinning up virtual clusters on a shared set of infrastructure in a way that ultimately improves utilization of that infrastructure, added Gentele.
Those virtual clusters can then be managed by DevOps or platform engineering teams using command line interfaces (CLIs), custom resource definitions (CRDs), Helm charts, infrastructure-as-code (IaC) tools and YAML files, or by IT administrators using a graphical user interface. The latter approach expands the pool of IT talent capable of managing Kubernetes clusters. Many Kubernetes clusters are initially deployed by DevOps engineers who have experience working with YAML files, but as the number of clusters increases, the need to enable IT administrators who have less programming expertise becomes more pressing.
Virtual Kubernetes clusters today are used most widely in pre-production environments to reduce the total number of physical Kubernetes clusters an organization needs to deploy. However, the number of virtual clusters used in production environments is growing as organizations look to reduce the total cost of IT and find themselves managing fleets of physical Kubernetes clusters that can quickly become cost prohibitive.
In the meantime, it’s not clear just how many virtual Kubernetes clusters have been spun up in recent years. But given how many virtual clusters can run on a single physical cluster, it may not be long before virtual clusters far outnumber physical ones, with the number of virtual nodes increasing just as quickly.