CNCF Survey Surfaces Widespread Adoption of Kubernetes Clusters
A global survey of 628 IT professionals finds a full 82% work in organizations that are running Kubernetes clusters in production environments.
Conducted by Linux Foundation Research on behalf of the Cloud Native Computing Foundation (CNCF), the survey also finds that a quarter of respondents (25%) are using cloud-native technologies across all their application development and deployment workflows, while just over a third (34%) are mostly using them.
A full 98% of survey respondents are using some type of cloud-native technology, with Kubernetes being the most widely used, followed by Helm (81%), etcd (81%), Prometheus (77%), CoreDNS (76%) and containerd (74%).
Among CNCF projects that have yet to fully graduate, the most widely used are the Container Network Interface (52%), OpenTelemetry (49%), gRPC (44%) and Keycloak (42%), the survey finds.
Interestingly, the survey also finds that adoption of cloud-native technologies does not necessarily equate directly to usage of containers. More than half of respondents (54%) are running mostly containers in production environments, compared to 35% that are using them in only a few production applications.
Just under half (47%) said the top challenge when it comes to deploying containers in production environments is cultural, followed by lack of training (36%), security (36%), continuous integration/continuous deployment (35%), monitoring (35%) and complexity (34%).
Hilary Carter, senior vice president of research for the Linux Foundation, said the survey results make it clear that change management issues remain a significant obstacle to adoption of containers and other cloud-native tools and platforms. Many of those challenges can be addressed by additional investments in training and certifications that focus, for example, on platform engineering. In fact, one reason many organizations are adopting platform engineering best practices is to address that very issue, she noted.
Despite the challenges, however, Kubernetes clusters are also being used to deploy new classes of artificial intelligence (AI) workloads. A full two-thirds (66%) said their organization is hosting either all (23%) or some (43%) of their AI inference workloads on Kubernetes clusters. That level of adoption suggests Kubernetes is gaining significant traction among organizations moving to operationalize AI, said Carter. In fact, many of those early adopters are using Kubernetes to deploy AI workloads both in the cloud and in on-premises IT environments that ensure greater isolation, she added.
There is, of course, a world of difference between an IT organization that has adopted one or two technologies that are being advanced under the auspices of the CNCF and one that makes use of multiple tools and platforms to build and deploy cloud-native applications that dynamically scale up and down. Many IT teams, for example, make use of containers and Kubernetes without necessarily being able to take full advantage of the capabilities such as auto-scaling that they enable.
Regardless of the level of maturity achieved, the one thing that is clear is that cloud-native tools and platforms are being pervasively adopted. The challenge and the opportunity now is to develop the skills and expertise needed to manage the tools and platforms at a higher level of scale in an era where AI is starting to dramatically increase the pace at which modern applications are being built and deployed.