Red Hat Extends Container Alliance With NVIDIA to 5G Edge
At the Mobile World Congress event this week, Red Hat and NVIDIA extended their alliance to include deployment of containerized artificial intelligence (AI) applications and services at the emerging 5G network edge using the Red Hat OpenShift Platform.
Under terms of the agreement, NVIDIA will encourage telecommunications carriers and other providers of 5G networking services to deploy the Red Hat OpenShift platform based on Kubernetes on the NVIDIA EGX platform, which makes use of graphics processing units (GPUs) to run AI applications in place of commodity processors.
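On Kubernetes-based platforms such as OpenShift, GPU capacity is typically surfaced through a device plugin and requested by workloads as an extended resource. A minimal sketch of what such a workload definition might look like follows; the image name is an illustrative placeholder, not an actual NVIDIA or Red Hat product image:

```yaml
# Hypothetical pod spec: schedules an AI inference container onto a GPU node.
# The nvidia.com/gpu resource is advertised by the NVIDIA device plugin;
# the image reference below is a placeholder for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: ai-inference-example
spec:
  containers:
  - name: inference
    image: example.com/ai-inference:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1  # request one GPU from the device plugin
```

Because the GPU is requested declaratively, the scheduler places the container only on nodes that actually expose GPU resources, which is what makes platforms like EGX practical targets for containerized AI workloads.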
Jered Floyd, a technology strategist in Red Hat’s Office of the CTO, says the goal is to make it easier for service providers to deploy NVIDIA Aerial, a software-defined radio service infused with AI capabilities that NVIDIA designed from the ground up to run as a cloud-native application. NVIDIA Aerial is based on a signal-processing engine that keeps all data within the GPU memory, while also providing low-latency access to network interface cards from Mellanox.
The alliance extension builds on an earlier agreement between the two companies focused on deploying Red Hat OpenShift on NVIDIA GPUs in data centers to build and deploy AI applications based on containers.
As carriers move to roll out 5G networking services, many of them are quickly embracing container network functions (CNFs) in place of virtual network functions (VNFs), which lock carriers into specific virtual machine architectures. Most carriers will end up running a mix of CNFs and VNFs for the foreseeable future; however, a marked preference is emerging for CNFs, which are more portable across different classes of IT infrastructure.
Floyd says the scope of the Red Hat effort with NVIDIA is limited to running containers on gateways at the network edge, as opposed to deploying containers on endpoints connected to those gateways. Containers in their current form are still too large to deploy on individual devices, he notes. However, an industry effort is underway to create more granular container architectures, which theoretically would enable containers to be deployed on any endpoint. Given the scope of carriers' 5G ambitions, there will be a need to process data at both the endpoint and the gateway.
In the meantime, however, the rate at which AI applications are being built and deployed is being hampered by a general lack of DevOps processes. While there is no shortage of work being done to create AI models, it turns out that most data scientists and application developers have not defined a set of best practices for inserting AI models into applications and then updating those models whenever required. As a result, it may be a while before AI models are deployed pervasively across highly distributed production environments. By employing Red Hat OpenShift, however, carriers would be putting themselves in a better position to leverage the container platform to apply DevOps to AI application development and deployment. In fact, at this juncture, it's not so much a question of how AI models will be built, but rather how they will be managed efficiently and deployed at scale.
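One reason a container platform helps here is that an updated AI model can ship as a new container image and roll out incrementally with no downtime. A hedged sketch of that pattern on Kubernetes follows; all names, labels, and image tags are illustrative assumptions, not artifacts from any carrier deployment:

```yaml
# Hypothetical Deployment: packaging a model server as a container image lets
# the platform manage model updates as rolling rollouts. All names and tags
# below are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-model-server
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # keep serving traffic while pods are replaced
  selector:
    matchLabels:
      app: edge-model-server
  template:
    metadata:
      labels:
        app: edge-model-server
    spec:
      containers:
      - name: model-server
        image: example.com/edge-model:v2  # bumping this tag triggers a rolling update
```

Updating the model then becomes the same operation as updating any other containerized application, which is the sense in which a platform like OpenShift lets carriers apply familiar DevOps practices to AI workloads.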