Grafana Labs Extends Observability Deeper Into Kubernetes Environments
Grafana Labs is leading an effort to update an open-source Kubernetes Monitoring Helm chart that will soon be able to send data to multiple destinations, in addition to providing built-in service integrations and a simplified configuration experience.
Company CTO Tom Wilkie said the overall goal is to simplify the management of Kubernetes environments at a time when organizations are making the platform their default for deploying new applications.
To support those efforts, Grafana Labs has been steadily improving its support for Kubernetes by, for example, adding troubleshooting, cost control and energy consumption tracking tools to Kubernetes Monitoring in the Grafana Cloud service, he added.
Additionally, Grafana Labs has added a suite of workflows for contextual analysis of Kubernetes environments to Grafana Cloud and extended the reach of its Sift diagnostic assistant in Grafana Cloud to enable IT teams to more easily identify the specific issues that might be the root cause of an IT incident.
Grafana Labs also continues to invest in OpenTelemetry, an open-source collection of APIs, SDKs and tools for collecting telemetry data that is being advanced under the auspices of the Cloud Native Computing Foundation (CNCF). The company has launched Grafana Alloy, a distribution of the OpenTelemetry Collector for logs, traces and metrics.
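As a rough illustration of how telemetry reaches a collector such as Alloy, the sketch below uses the OpenTelemetry Python SDK (the opentelemetry-sdk and opentelemetry-exporter-otlp packages) to export a metric over OTLP/gRPC. The endpoint, service name and metric name are assumptions made for this example and are not tied to any specific Grafana product; 4317 is simply the default OTLP/gRPC port an OpenTelemetry Collector listens on.

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Export metrics over OTLP/gRPC to a locally running collector.
# The endpoint is an assumption for this sketch.
exporter = OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True)
reader = PeriodicExportingMetricReader(exporter)

# "checkout-service" is a hypothetical service name used only for illustration.
provider = MeterProvider(
    resource=Resource.create({"service.name": "checkout-service"}),
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

# Record a simple counter; the collector forwards it to whatever
# backends it has been configured to send metrics to.
meter = metrics.get_meter("example-instrumentation")
orders_counter = meter.create_counter(
    "orders_processed", description="Orders handled by the service"
)
orders_counter.add(1, {"region": "us-east"})

Once instrumented this way, the application does not need to know where its metrics ultimately land; re-pointing the collector at a different backend is a configuration change rather than a code change.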
Previously, the company also shared open-source code that translates Datadog metric formats into the native OpenTelemetry format, and it launched Explore Metrics, a no-code tool for browsing and analyzing Prometheus-compatible metrics without needing to write PromQL queries, which now also supports OpenTelemetry data.
At the same time, it’s also becoming simpler to collect data as more organizations adopt platforms that have extended Berkeley Packet Filter (eBPF) capabilities built into them, noted Wilkie.
In general, organizations of all sizes are moving beyond simply monitoring predefined metrics to embrace observability. A Techstrong Research survey finds nearly half of respondents (48%) already work for organizations that practice observability regularly. A full 63% noted their organization will be making additional investments in observability over the next two years, with 21% describing those investments as significant.
As IT environments become more complex, it is becoming increasingly difficult to manage highly distributed applications that tend to have many dependencies spread across multiple microservices. That architecture tends to make applications more resilient than a traditional monolith, but it also makes them considerably more challenging to troubleshoot and maintain.
Of course, transitioning to an observability platform is a journey that starts with being able to collect the telemetry data that DevOps teams need to analyze. There are still large numbers of legacy applications that have never been instrumented, but as more sources of telemetry data are added to an application environment, that transition should become smoother. There may even come a day when the artificial intelligence (AI) embedded in those platforms identifies and resolves issues long before anyone realizes there is a problem.
In the meantime, however, humans will still be needed to make sense of all the telemetry data being collected from various sources, the volume of which seems to increase exponentially with each passing day.