Survey Surfaces Myriad Kubernetes Networking Challenges
A global survey of 232 IT professionals finds respondents are using, on average, 6.28 tools to manage networks, with the top challenges being debugging/observability (61%), controlling egress or external access (35%), managing service-to-service connectivity (32%), performance under load (30%) and managing multiple instances of Kubernetes (27%).
Conducted by Isovalent, an arm of Cisco, the survey also identifies the top network security challenges as securing inter-cluster communication (52%), followed by enforcing policies across multiple clusters (39%), multi-tenancy segregation (39%), securing ingress/egress traffic (36%) and controlling operator and user access (33%).
Survey respondents are primarily platform, infrastructure and site reliability engineers and DevOps practitioners (66%), compared to 11% who are network specialists.
A full 60% said they are very involved in Kubernetes networking, with self-hosted instances of Kubernetes and Amazon Elastic Kubernetes Service (EKS) tied at 36% each, followed by Azure Kubernetes Service (AKS) (27%), Google Kubernetes Engine (GKE) (22%) and Red Hat OpenShift (19%). Just over half (51%) are managing 11 to 100 nodes, with 8% managing more than 100 nodes.
In terms of workloads, microservices/web applications (93%) are most widely deployed, followed by databases (69%), virtual machines (36%) and artificial intelligence/machine learning (26%).
The most widely used networking tools are Grafana (75%), packet visualization tools (26%), click-to-trace applications (26%), OpenTelemetry (19%), eBPF-based observability tools (15%) and the Istio service mesh (5%).
A full 60% are using the Cilium container networking interface (CNI) platform developed by Isovalent, followed by 25% using the Calico framework and 23% using the CNI that Amazon Web Services (AWS) provides for its virtual private clouds.
However, the most widely adopted ingress controller/service mesh is NGINX Ingress (60%), followed by Istio (25%). The most widely used load balancers come from public cloud providers, followed by Cilium at 29%.
Nico Vibert, a senior staff technical marketing engineer for Isovalent, said the survey makes it clear that, given the number of tools being used, transparency is a major challenge. That’s especially problematic when it comes to securing Kubernetes environments that typically run high-value workloads, he added.
It’s not clear why more networking professionals are not involved in Kubernetes environments, but as more clusters are deployed in production, they might gradually start to play a larger role, noted Vibert. The immediate challenge for them is to acquire the skills and expertise required not only to network Kubernetes clusters but, just as importantly, to connect them to legacy application environments already running in production, he added.
Each organization will need to determine how best to allocate networking responsibilities, but one thing is clear: platform engineering methodologies are being more widely embraced. A full 43% of respondents said they are members of a platform engineering team, which over time could expand to include networking specialists. Hopefully, as networking specialists join those teams, the cognitive load currently required to manage Kubernetes environments will become more evenly distributed.
In the meantime, however, each additional Kubernetes cluster only exacerbates inherent network management challenges that are already taking time away from other tasks DevOps and platform engineering teams are finding difficult to perform as more cloud-native applications are deployed.


