JFrog Extends Reach Into Realm of NVIDIA AI Microservices
JFrog today revealed it has integrated its platform for managing software supply chains with NVIDIA NIM, a microservices-based framework for building artificial intelligence (AI) applications.
Announced at the JFrog swampUP 2024 conference, the integration is part of a larger effort to unify DevSecOps and machine learning operations (MLOps) workflows that began with JFrog's recent acquisition of Qwak AI.
NVIDIA NIM gives organizations access to a set of pre-configured AI models that can be invoked via application programming interfaces (APIs). Those models can now be managed using the JFrog Artifactory model registry, a platform for securely housing and managing software artifacts, including binaries, packages, files, containers and other components.
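Because NIM microservices expose an OpenAI-compatible HTTP API, invoking one of those pre-configured models looks much like calling any other REST service. The following is a minimal sketch, assuming a NIM container already running behind a placeholder endpoint; the host and model identifier below are illustrative, not values from the announcement:

```python
import requests

# Hypothetical endpoint for a locally running NIM container; NIM services
# expose an OpenAI-compatible chat completions route, by default on port 8000.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # illustrative model identifier
    "messages": [{"role": "user", "content": "Summarize our deployment policy."}],
    "max_tokens": 128,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```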
The JFrog Artifactory registry also integrates with NVIDIA NGC, a hub of cloud services for building generative AI applications, and with the NGC Private Registry for sharing AI software.
JFrog CTO Yoav Landman said this approach makes it simpler for DevSecOps teams to apply the version control techniques they already use for software binaries to managing which AI models are deployed and updated.
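In practice, treating a model artifact like any other versioned binary can be as simple as attaching metadata to it in Artifactory. Here is a minimal sketch using Artifactory's set-item-properties REST API; the instance URL, repository layout and token are placeholders, not part of the JFrog announcement:

```python
import requests

# Placeholder values for illustration only.
ARTIFACTORY = "https://mycompany.jfrog.io/artifactory"
ARTIFACT = "ml-models/llama3-8b-instruct/1.2.0/model.tar.gz"
TOKEN = "<access-token>"

# Attaching properties to the artifact lets pipelines pin deployments
# to a known-good model version, mirroring how binaries are promoted today.
url = f"{ARTIFACTORY}/api/storage/{ARTIFACT}?properties=model.version=1.2.0;model.stage=production"
resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```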
Each of those AI models is packaged as a set of containers that enable organizations to centrally manage them regardless of where they run, he added. In addition, DevSecOps teams can continuously scan those models, including their dependencies, both to secure them and to track audit and usage statistics at every stage of development.
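Usage data is reachable over the same REST surface. As one illustration, again with placeholder names, Artifactory's item-statistics endpoint reports how often and when a stored model artifact was downloaded:

```python
import requests

ARTIFACTORY = "https://mycompany.jfrog.io/artifactory"  # placeholder instance
ARTIFACT = "ml-models/llama3-8b-instruct/1.2.0/model.tar.gz"  # hypothetical path

# The ?stats flag on the storage API returns download counts and
# last-download timestamps, one way to audit how a model is being consumed.
stats = requests.get(f"{ARTIFACTORY}/api/storage/{ARTIFACT}?stats", timeout=30).json()
print(stats.get("downloadCount"), stats.get("lastDownloaded"))
```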
The overall goal is to accelerate the pace at which AI models are added and updated within the context of a familiar set of DevSecOps workflows, said Landman.
That's critical because many of the MLOps workflows data science teams have created replicate processes DevOps teams already use. A feature store, for example, provides a mechanism for sharing reusable data features in much the same way a Git repository lets DevOps teams share code. The acquisition of Qwak provided JFrog with an MLOps platform through which it is now driving integration with DevSecOps workflows.
Of course, organizations looking to meld MLOps and DevOps teams will also encounter significant cultural challenges. Many DevOps teams deploy code multiple times a day; data science teams, by comparison, can require months to build, test and deploy an AI model. Savvy IT leaders should take care that the current cultural divide between data science and DevOps teams doesn't grow any wider. After all, the question at this juncture is not so much whether DevOps and MLOps workflows will converge as when, and to what degree. The longer that divide persists, the greater the inertia that will need to be overcome to bridge it.
With organizations under more economic pressure than ever to reduce costs, there may be no better time to identify redundant workflows. After all, the simple truth is that building, updating, securing and deploying AI models is a repeatable process that can be automated, and more than a few data science teams would prefer that someone else managed that process on their behalf.