StormForge Automates Right-Sizing of Container Applications

StormForge today added the ability to automatically right-size container environments to its platform for optimizing the consumption of Kubernetes resources.

Yasmin Rajabi, vice president of product management for StormForge, says version 2.0 of StormForge Optimize Live will enable IT organizations to continuously right-size containers to reduce costs and optimize application performance.

IT teams today are asked to configure settings to achieve that goal, but given the scale at which cloud-native applications run, it has become all but impossible for them to rely on manual processes, notes Rajabi.
As a result, organizations are wasting a significant amount of resources at a time when many of them are more sensitive to IT costs than ever, she adds.

The latest version of StormForge Optimize Live enables IT teams to right-size applications by first using Helm to install a controller in the cluster. Machine learning algorithms then provide recommendations for optimizing deployments via a dashboard that aggregates resource impacts, cost savings and reliability improvements, says Rajabi.
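As a rough sketch of what that installation step typically looks like, a Helm-based agent install usually follows the pattern below. The repository URL, chart name, release name and values here are illustrative placeholders, not StormForge's actual ones:

```shell
# Illustrative only: repository, chart and value names are hypothetical.
helm repo add stormforge https://charts.example.com/stormforge
helm repo update

# Install the controller into its own namespace, passing credentials
# for the SaaS backend as chart values.
helm install stormforge-agent stormforge/stormforge-agent \
  --namespace stormforge-system \
  --create-namespace \
  --set authorization.clientID=<CLIENT_ID> \
  --set authorization.clientSecret=<CLIENT_SECRET>
```

Once the controller is running, it can observe workload metrics in the cluster and feed them back to the recommendation engine.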

Optimize Live also automatically detects the presence of a Horizontal Pod Autoscaler (HPA) and recommends a target CPU utilization to enable bi-dimensional autoscaling, she notes.
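Bi-dimensional autoscaling here means pairing the HPA's horizontal scaling (replica count) with vertical right-sizing of each pod's CPU and memory requests. A minimal HPA manifest of the kind such a tool would detect might look like the following sketch, with the workload name and numbers purely illustrative:

```yaml
# Illustrative HPA manifest; names and thresholds are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # the value a recommendation engine could tune
```

Because the HPA scales on utilization relative to the pod's CPU request, changing either the request or the target utilization shifts when new replicas are added, which is why the two dimensions have to be tuned together.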

IT teams can then export the recommendations as YAML files directly to their continuous integration/continuous delivery (CI/CD) platform, or let Optimize Live deploy them automatically to autoscale Kubernetes environments.

Overprovisioning of Kubernetes clusters is commonplace because, in the name of application resiliency, most developers routinely allow their applications to consume as much of the available infrastructure resources as possible. In the wake of the economic downturn, however, more IT teams are now under pressure to reduce costs by increasing infrastructure utilization rates.
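In practice, that overprovisioning usually shows up as inflated resource requests on each container, and right-sizing means lowering the requests toward observed usage while keeping limits as a guardrail. A hypothetical before-and-after fragment of a deployment spec illustrates the idea (all values are invented for illustration):

```yaml
# Overprovisioned: requests set defensively, far above observed usage,
# so the scheduler reserves capacity the application never consumes.
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    cpu: "2"
    memory: 4Gi
---
# Right-sized: requests track observed peak usage plus headroom,
# while limits remain as guardrails against runaway consumption.
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
```

Because the scheduler packs pods onto nodes based on requests, shrinking requests to realistic values is what actually raises cluster utilization and cuts the node count.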

StormForge Optimize Live is purpose-built for Kubernetes clusters to achieve that goal without increasing the risk of the out-of-memory errors and CPU throttling that can adversely impact application performance.

One of the factors that has constrained the adoption of Kubernetes is the overall complexity of the platform. It can be challenging to implement and maintain fleets of Kubernetes clusters simply because of the number of settings and options that must be mastered. However, as tools that take advantage of machine learning algorithms and other forms of artificial intelligence (AI) become more widely available, the Kubernetes platform should become more accessible to IT administrators of varying skill levels, rather than always requiring a DevOps team to programmatically manage infrastructure alongside the applications that run on it.

In the meantime, finance teams have taken note of the total cost of running cloud-native applications. In theory, of course, Kubernetes is designed to dynamically scale infrastructure up and down as required, but many developers tend to assume Kubernetes environments function much like any virtual machine, so they provision as much memory as possible and don't take advantage of lower-cost cloud services that run workloads for a specified amount of time. The challenge and the opportunity now is to employ machine learning algorithms to make up for developers who, when it comes to consistently optimizing Kubernetes environments, are all too human.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
