Datadog Adds Automatic Kubernetes Scaling to Observability Platform
At its Dash 2024 conference today, Datadog added a set of autoscaling capabilities for Kubernetes clusters that can be invoked via its observability platform.
Available in beta, the Datadog Kubernetes Autoscaling capabilities are designed to make it simpler for IT teams to optimize the consumption of infrastructure resources.
Mehdi Sif, director of product marketing at Datadog, said that while Kubernetes clusters have always provided the ability to dynamically scale resources up and down as required, invoking that capability today requires significant expertise. The Datadog observability platform, he noted, now makes it simpler to scale Kubernetes clusters dynamically at a higher level of abstraction, one that is more accessible to IT teams typically made up of DevOps engineers and IT administrators.
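To give a sense of the expertise involved, the sketch below shows roughly what configuring a native Kubernetes HorizontalPodAutoscaler looks like through the official Kubernetes Python client. The deployment name, namespace and thresholds are hypothetical placeholders, and this is the native mechanism Datadog is abstracting, not Datadog's own API.

```python
# A minimal sketch of native Kubernetes autoscaling configuration using
# the official "kubernetes" Python client. The deployment name, namespace
# and thresholds are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        # The workload this autoscaler controls.
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        # Add or remove pods to hold average CPU utilization near 70%.
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Even this minimal example presumes familiarity with the autoscaling/v2 API, metric targets and sensible replica bounds, which is precisely the knowledge gap a higher level of abstraction is meant to close.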
Despite that ability to dynamically control infrastructure consumption, most cloud-native applications are as overprovisioned as legacy monolithic applications. Largely out of habit, many application developers routinely overprovision IT infrastructure to ensure application availability, with little regard for cost. A recent analysis conducted by Datadog found that 83% of container costs were associated with idle resources: capacity provisioned to ensure applications can handle peak demand, even though Kubernetes is designed to scale up automatically as needed.
In addition to monitoring consumption of IT infrastructure resources, the Datadog Kubernetes Autoscaling capability enables IT teams to automatically rightsize Kubernetes resources on demand, said Sif. That makes it easier to identify workloads and clusters with a high proportion of idle resources and then either implement a one-time fix or rely on the Datadog platform to scale the workload automatically as needed.
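Datadog has not detailed the internals of that rightsizing mechanism, but the general shape of a one-time fix is straightforward: lower a workload's resource requests to match observed usage. A hypothetical sketch, again using the Kubernetes Python client, with illustrative names and values:

```python
# Hypothetical sketch of a one-time rightsizing fix: patch a deployment's
# container resource requests and limits down to values that match
# observed usage. The workload name, namespace and figures are
# illustrative, not Datadog's actual output.
from kubernetes import client, config

config.load_kube_config()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "resources": {
                            "requests": {"cpu": "250m", "memory": "256Mi"},
                            "limits": {"cpu": "500m", "memory": "512Mi"},
                        },
                    }
                ]
            }
        }
    }
}

# A strategic merge patch updates only the fields supplied above.
client.AppsV1Api().patch_namespaced_deployment(
    name="web", namespace="default", body=patch
)
```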
It’s not clear to what degree organizations are embracing FinOps best practices to better control cloud costs, but in an era when developers have been allowed to provision IT infrastructure directly, there is a lot of waste. As the number of applications deployed in the cloud steadily increases, organizations are trying to maximize their investments in a way that enables them to run more applications without unnecessarily increasing cloud infrastructure costs.
The challenge, as always, is finding the simplest way to achieve that goal within the existing workflows used to manage IT operations, rather than requiring IT teams to deploy yet another tool that they must then integrate.
In the meantime, the pace at which cloud-native applications are deployed on Kubernetes clusters is only going to increase. As such, optimizing the usage of the underlying IT infrastructure that runs those applications has become a pressing concern in an economic climate that remains, at best, uncertain.