Report Surfaces Scope of Overspending in the Cloud

A report published today by CAST AI, a provider of a machine learning-based platform for optimizing cloud costs in Kubernetes environments, finds that organizations are, on average, spending three times as much on cloud computing platforms as they need to.

Laurent Gil, chief product officer for CAST AI, said an analysis of 400 organizations using a free cluster analysis tool provided by the company found that most organizations overprovision cloud resources to ensure application availability.

Nearly two-thirds of the wasted spending is attributable to CPUs and memory that are provisioned but not utilized, as well as to virtual machines running on expensive CPUs with memory footprints that are larger than needed, the report finds. The remaining waste results from under-leveraging spot instances for containers that typically don’t run very long.
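To make those two waste categories concrete, the minimal Python sketch below shows how idle provisioned capacity and foregone spot discounts might be quantified for a single node. All of the numbers are hypothetical and are not drawn from the CAST AI report.

```python
# Illustrative only: hypothetical per-node figures showing how the two waste
# categories in the report might be quantified. None of these numbers come
# from the CAST AI study.

provisioned_cpu, used_cpu = 16.0, 5.5        # vCPUs provisioned vs. actually consumed
provisioned_mem, used_mem = 64.0, 22.0       # GiB provisioned vs. actually consumed

cpu_waste = 1 - used_cpu / provisioned_cpu   # share of paid-for CPU sitting idle
mem_waste = 1 - used_mem / provisioned_mem   # share of paid-for memory sitting idle

# Short-lived containers kept on on-demand VMs miss out on spot pricing, which
# is often substantially cheaper (exact discounts vary by provider and instance type).
on_demand_hourly, spot_hourly = 0.68, 0.20   # hypothetical $/hour for one VM
spot_savings = 1 - spot_hourly / on_demand_hourly

print(f"CPU overprovisioning waste:    {cpu_waste:.0%}")
print(f"Memory overprovisioning waste: {mem_waste:.0%}")
print(f"Potential spot discount:       {spot_savings:.0%}")
```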

A full 98% of the IT organizations studied could generate substantial cost savings if they managed their clusters more efficiently, the report notes.

The challenge, says Gil, is that IT teams are unlikely ever to be able to optimize cloud spending manually. Cloud providers typically offer more than 600 different instance types, and many of those resources are provisioned by developers who don’t always have a strong appreciation for cost, he notes. It usually falls to a DevOps team to rightsize and optimize cloud costs across all the different instance types employed across multiple cloud service providers.

The issue is that, in the absence of tools to analyze the environment and then automatically optimize it to control costs, most IT teams overprovision clusters to ensure applications have as much access to cloud infrastructure resources as possible. IT organizations have always tended to make as much infrastructure as possible available to applications, but in the age of the cloud the cost of that approach becomes prohibitive as the number of applications being deployed steadily increases.

Of course, the irony of this situation is that Kubernetes clusters in particular are designed to make it easier to scale infrastructure resources up and down as required. However, many IT teams don’t aggressively manage consumption of cloud resources simply because, historically, there has been little precedent for being able to do so, notes Gil.
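For context, the scaling mechanism Kubernetes provides out of the box, the Horizontal Pod Autoscaler, works by comparing observed utilization to a target. The sketch below implements its documented scaling formula (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)) to show how replica counts can track demand in both directions; it is a simplified illustration, not a description of CAST AI's platform.

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float) -> int:
    """Core formula used by the Kubernetes Horizontal Pod Autoscaler:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return max(1, math.ceil(current_replicas * current_utilization / target_utilization))

# Example: 4 replicas running at 90% average CPU against a 60% target scale
# out to 6; the same workload idling at 20% CPU scales back down to 2.
print(desired_replicas(4, 0.90, 0.60))  # -> 6
print(desired_replicas(4, 0.20, 0.60))  # -> 2
```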

CAST AI is making the case for a platform that relies on machine learning algorithms to make sure requested and provisioned CPUs remain synchronized over time, minimizing overprovisioning. In the wake of the COVID-19 pandemic, cloud costs have become a larger issue because organizations accelerated their transitions to the cloud as on-premises IT environments became less physically accessible to internal IT staff. Now, however, business and IT leaders are looking to better assess how those resources are being consumed as part of a larger effort to rein in IT costs that were allowed to balloon during the pandemic, Gil says.
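CAST AI does not publish the details of its models, but the basic idea of keeping requests aligned with actual consumption can be illustrated with a simple heuristic: derive a new CPU request from a high percentile of observed usage plus some headroom. The Python sketch below is a hypothetical illustration of that idea under those assumptions, not the company's algorithm.

```python
import statistics

def recommend_cpu_request(usage_samples_millicores: list[float],
                          headroom: float = 0.15) -> int:
    """Naive rightsizing heuristic: set the CPU request to roughly the 95th
    percentile of observed usage plus a safety margin. Real optimizers weigh
    far more signals (burst patterns, SLOs, pricing, bin-packing across nodes)."""
    p95 = statistics.quantiles(usage_samples_millicores, n=20)[-1]  # ~95th percentile
    return round(p95 * (1 + headroom))

# Hypothetical usage samples (millicores) for a container requesting 2000m.
samples = [180, 220, 260, 240, 310, 290, 205, 330, 275, 250, 400, 265,
           230, 295, 315, 260, 285, 270, 245, 300]
print(recommend_cpu_request(samples))  # recommends far less than the 2000m requested
```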

It may be a while before IT teams are ready to completely trust AI platforms to automatically optimize IT environments, but it is certain that optimizing cloud costs will become progressively easier as those algorithms become more capable of understanding each unique IT environment in which they are employed.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise as well as editor-in-chief of CRN and InfoWorld.
