StormForge Extends Kubernetes Autoscaling Capability
StormForge today announced it has expanded its platform's ability to automatically scale Kubernetes pods both vertically and horizontally using machine learning algorithms.
Rich Bentley, product marketing manager for StormForge, says the latest version of StormForge Optimize Live provides IT teams with a bi-dimensional autoscaling capability that both right-sizes pods vertically and sets target utilization rates for horizontal scaling, without the two working against each other. Previously, IT teams had to choose between vertically or horizontally scaling Kubernetes clusters but could not do both simultaneously, he noted.
StormForge Optimize Live was created to enable IT teams to minimize the consumption of infrastructure resources and, through better capacity planning, reduce costs. The challenge IT teams encounter in Kubernetes environments is that workloads are so dynamic that achieving that goal manually is practically impossible. Machine learning algorithms enable IT teams to automatically manage Kubernetes settings based on the attributes of the workloads running, says Bentley.
Kubernetes provides a horizontal pod autoscaler (HPA) natively and a vertical pod autoscaler (VPA) as an add-on. It has not been practical to use the two capabilities together without extensive customization efforts, notes Bentley. HPA also requires IT teams to manually set a target utilization that determines when to add or remove replicas.
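As a rough illustration of that manual step, the sketch below uses the official Kubernetes Python client (assuming a recent client release that exposes the autoscaling/v2 API) to create a conventional HPA. The Deployment name "web," the replica bounds and the 70% CPU target are illustrative placeholders, not StormForge-specific settings; picking those numbers is precisely the guesswork the vendor says its machine learning is meant to replace.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config would also work).
config.load_kube_config()

# A conventional HPA: the IT team must pick the 70% CPU target and the
# replica bounds up front, by hand.
hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Once the target is set, the HPA controller adds replicas when observed CPU utilization exceeds 70% of the requested CPU and removes them when it falls below; the target itself never adapts to the workload.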
As a result, overprovisioning of Kubernetes clusters is commonplace. In the name of application resiliency, most developers routinely allow their applications to consume as much of the available infrastructure as possible. In the wake of the economic downturn, however, more IT teams are under pressure to reduce costs by increasing infrastructure utilization rates, notes Bentley.
StormForge Optimize Live is purpose-built for Kubernetes clusters to reduce the risk of out-of-memory errors and CPU throttling, both of which can adversely impact application performance, Bentley says.
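For context, the settings that right-sizing adjusts are the per-container resource requests and limits: a memory limit set too low invites out-of-memory kills, while a CPU limit set too low causes throttling under load. The following is a minimal sketch, again using the Kubernetes Python client, of patching those values on the same hypothetical "web" Deployment; the specific CPU and memory figures are illustrative only.

```python
from kubernetes import client, config

config.load_kube_config()

# Right-sizing in practice means tuning these per-container values so they
# track what the workload actually uses, rather than a worst-case guess.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "resources": {
                            "requests": {"cpu": "250m", "memory": "256Mi"},
                            "limits": {"cpu": "500m", "memory": "512Mi"},
                        },
                    }
                ]
            }
        }
    }
}

# Apply the new requests and limits as a strategic merge patch.
client.AppsV1Api().patch_namespaced_deployment(
    name="web", namespace="default", body=patch
)
```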
One of the factors that has constrained the adoption of Kubernetes is the overall complexity of the platform. It can be challenging to implement and maintain fleets of Kubernetes clusters simply because of the number of settings and options that must be mastered. However, as tools that take advantage of machine learning algorithms and other forms of artificial intelligence (AI) become more widely available, the Kubernetes platform should become more accessible to the average IT administrator. That’s critical, because there are not enough DevOps engineers with the required expertise to programmatically manage Kubernetes clusters at scale. The only way to make up for that shortfall is to provide IT administrators with tools to automate management tasks.
It’s not clear how much the changing economic conditions will drive IT teams to rediscover the art of capacity planning. A few decades ago, it was common for IT teams to employ capacity management tools. However, as industry-standard x86 servers became viewed as a commodity, the amount of focus on capacity planning waned. Now, however, as IT teams reckon with cloud service provider bills based on the amount of infrastructure consumed, there is renewed interest in capacity planning. The only difference today is that instead of relying on the expertise of an IT administrator to manage capacity planning, the entire process is much more automated.