CloudBolt Acquires StormForge to Optimize Kubernetes Infrastructure
CloudBolt, a provider of an IT automation platform, this week revealed it has acquired StormForge, a provider of a platform that leverages machine learning algorithms to optimize the consumption of Kubernetes infrastructure resources.
At the same time, StormForge, which will continue to operate as an arm of CloudBolt, announced at the KubeCon + CloudNativeCon Europe 2025 conference that it has added the ability to optimize individual nodes on a Kubernetes cluster, ensuring, for example, that there is enough memory available on a specific node to run the workloads assigned to it.
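In practice, rightsizing of this kind operates on the CPU and memory requests and limits declared in pod specs, which the Kubernetes scheduler uses to decide whether a node has room for a workload. A minimal illustrative manifest (the names and values below are generic examples, not StormForge recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical workload name
spec:
  containers:
    - name: web
      image: nginx:1.27
      resources:
        requests:
          memory: "256Mi"   # scheduler only places the pod on a node with this much free memory
          cpu: "250m"
        limits:
          memory: "512Mi"   # the container is OOM-killed if it exceeds this ceiling
          cpu: "500m"
```

Lowering overly generous requests is what frees unused capacity on a node; raising requests that are too small is what prevents workloads from landing on nodes that cannot actually run them.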
Additionally, StormForge is now making generally available the ability to surface Java heap size recommendations to improve application performance, along with the ability to access its software via the Amazon Web Services (AWS) Marketplace, enabling IT teams to license StormForge software based on actual usage.
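A Java heap recommendation typically lands as a JVM flag that should be kept in step with the container's memory limit. A hypothetical sketch of applying one in a Kubernetes container spec (the service name, image, and flag value are illustrative, not StormForge output):

```yaml
containers:
  - name: java-service        # hypothetical service name
    image: example/app:1.0    # placeholder image
    env:
      - name: JAVA_TOOL_OPTIONS
        # cap the heap at 75% of the container memory limit declared below
        value: "-XX:MaxRAMPercentage=75.0"
    resources:
      limits:
        memory: "2Gi"
```

Sizing the heap relative to the container limit, rather than hard-coding an absolute value, keeps the two from drifting apart when the limit is later retuned.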
Finally, StormForge COO Yasmin Rajabi will now take on the role of chief strategy officer at CloudBolt. The goal is to facilitate an integration that will make it simpler to rightsize IT infrastructure environments in an era where utilization rates have remained consistently low, she said.
CloudBolt CTO Kyle Campos added that artificial intelligence (AI) technologies developed by StormForge will also be integrated into the core CloudBolt platform to enable organizations to embed FinOps best practices in a way that can be easily automated. Those capabilities will further extend the scope of the AI technologies that CloudBolt already provides to automate the management of FinOps workflows, he added.
The only way to achieve that goal is to ensure that visibility into how much it costs to deploy applications is embedded as a service into the platforms that IT teams use to manage IT environments, noted Campos. Most IT teams, unfortunately, have little to no visibility into, for example, how much cloud infrastructure is being consumed by Kubernetes clusters, he added.
The more Kubernetes clusters deployed, the more pressing that issue becomes, noted Campos. In fact, a recent Futurum Research survey suggests many organizations are already struggling with that issue. A full 61% of respondents are using Kubernetes clusters to run some (41%) or most (19%) of their production workloads.
Most Kubernetes environments today are managed in isolation by a dedicated team of software engineers, but as more cloud-native applications are deployed on Kubernetes clusters, there will inevitably come a tipping point at which legacy monolithic applications account for less than half of the applications deployed. As IT organizations reach that milestone, they will increasingly need to centralize the management of Kubernetes and those legacy platforms. In fact, the rise of platform engineering is arguably a first step toward achieving that larger goal.
Of course, it’s also only a matter of time before the number of Kubernetes clusters exceeds the ability of DevOps engineers to manage them. Eventually, the processes relied on to manage Kubernetes clusters will become more automated with the help of AI. At the same time, graphical tools for Kubernetes should help make the platform more accessible to traditional IT administrators.
Regardless of approach, reining in the cost of Kubernetes clusters is going to be a much higher priority for all concerned.