F5 Extends Ability to Scale and Secure Network Traffic Across Kubernetes Clusters

F5 has released an update to its BIG-IP Next Cloud-Native Network Functions (CNF) platform that makes it simpler to scale network traffic horizontally across Kubernetes clusters.
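The article does not detail how F5 implements that scaling, but horizontal scaling of workloads in Kubernetes is typically driven by standard primitives. As a generic, non-F5-specific sketch, a HorizontalPodAutoscaler can grow and shrink a hypothetical network-function deployment with load (all names here are illustrative assumptions):

```yaml
# Generic Kubernetes sketch (not an F5 configuration): scale a
# hypothetical packet-processing Deployment horizontally on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cnf-dataplane-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cnf-dataplane          # hypothetical network-function workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

In practice, network functions are often scaled on traffic-derived custom metrics (connections or packets per second) rather than CPU alone, but the mechanism is the same.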

Additionally, version 2.0 of F5 BIG-IP Next CNF provides support for faster DNS queries, the ability to enforce policies, and a unified set of firewall, distributed denial-of-service (DDoS) mitigation, intrusion prevention system (IPS) and carrier-grade network address translation (CGNAT) services that can all be centrally managed.

BIG-IP Next CNF 2.0 makes use of disaggregation to reduce CPU utilization rates by a third, while simultaneously consolidating services to reduce infrastructure costs by more than 60%, the company claims.

In addition, F5 has certified BIG-IP Next CNF 2.0 compatibility with Red Hat OpenShift, an application development and deployment platform based on Kubernetes.

BIG-IP Next CNF 2.0 is designed to be deployed on top of the F5 Application Delivery and Security Platform (ADSP) to make it simpler to provide networking and security services at scale to cloud-native applications running across a distributed computing environment.

Mike Rau, senior vice president of technical enterprise strategy and business development at F5, said as more data-intensive applications that incorporate artificial intelligence (AI) models are deployed on Kubernetes clusters, the need for a platform capable of providing higher levels of network bandwidth by optimizing network traffic flows is becoming more acute. That’s critical because many more of these distributed applications are more latency-sensitive than previous generations of cloud-native applications, he added.

Just as importantly, IT teams also need to centrally manage guardrails to ensure AI models are only exposed to data accessed over application programming interfaces (APIs) that are managed and governed via the F5 ADSP, noted Rau.

It’s not clear what impact AI is having on network bandwidth requirements, but the percentage of AI applications being deployed on Kubernetes clusters continues to climb. AI traffic, after all, is just another type of API traffic, said Rau. The challenge is finding a way to manage and secure AI applications that are invoking APIs at levels that are orders of magnitude greater than legacy applications, he added.

At the same time, more organizations are also discovering a newfound appreciation for API security with the rise of AI models, which are subject to multiple cyberattack vectors that can lead to everything from data being extracted to, worse yet, data being deliberately poisoned in a way that generates inaccurate outputs, added Rau.

There are, of course, many approaches to securing access to APIs and associated networking services, but F5 is making a case for an approach that extends investments already made in its core ADSP.

The challenge now, of course, is to make sure that the teams building and deploying AI applications understand how existing investments in IT infrastructure can be extended to meet their requirements, rather than acquiring an entirely different class of infrastructure that must then be supported alongside everything already deployed.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
