Canonical Adds MindSpore AI Framework to Kubeflow Distribution

At the Open Source Experience Paris conference this week, Canonical announced it has integrated MindSpore, an open source deep learning framework developed by Huawei, with its distribution of the open source Kubeflow platform for building artificial intelligence (AI) models on top of Kubernetes clusters.

Andreea Munteanu, product manager for Canonical, said Charmed Kubeflow provides a platform for managing machine learning operations (MLOps) on a Kubernetes cluster using a set of frameworks that are curated by Canonical on behalf of data science teams. The MindSpore framework is optimized for building AI models that enable computer vision.

MindSpore is already used by more than 5,000 businesses and has been downloaded over 2.49 million times. More than 6,600 developers have also contributed code to the project, which is optimized to run on Huawei platforms. Data science teams can now choose the MindSpore image from the default JupyterLab image list that Canonical presents via Charmed Kubeflow.
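For a sense of what teams would do inside that notebook image, here is a minimal sketch of a small computer-vision model written against MindSpore's standard Python API; the architecture, shapes and class names are illustrative assumptions, not part of Canonical's announcement.

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

class SmallCNN(nn.Cell):
    """A tiny LeNet-style convolutional network for 32x32 grayscale images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.SequentialCell([
            # MindSpore's Conv2d defaults to pad_mode='same',
            # so spatial dimensions are preserved by the convolutions.
            nn.Conv2d(1, 6, 5),   # 1 input channel -> 6 feature maps
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),  # 32x32 -> 16x16
            nn.Conv2d(6, 16, 5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),  # 16x16 -> 8x8
        ])
        self.flatten = nn.Flatten()
        self.classifier = nn.Dense(16 * 8 * 8, num_classes)

    def construct(self, x):  # MindSpore's equivalent of forward()
        x = self.features(x)
        x = self.flatten(x)
        return self.classifier(x)

net = SmallCNN()
dummy = Tensor(np.random.rand(1, 1, 32, 32).astype(np.float32))
print(net(dummy).shape)  # (1, 10)
```

The main difference a PyTorch or TensorFlow user would notice is that MindSpore models subclass nn.Cell and define a construct method rather than forward; otherwise the layer vocabulary is familiar.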

Kubeflow is not the only cloud-native MLOps framework for building AI models on top of Kubernetes clusters, but as an open source project it is gaining traction as more of these models are built using containers that are orchestrated on Kubernetes clusters. Less clear at the moment is the degree to which MLOps and DevOps practices might ultimately converge. Some DevOps advocates are already contending that AI models are just another type of software artifact that can be stored in a Git repository. The workflows may be different, but there is no need for a separate repository to store AI artifacts, they argue.

Most data science teams today employ MLOps platforms to share AI models, but most organizations are still a long way from defining a set of best practices for managing those models, noted Munteanu. In many cases, the MLOps platform they chose came with a repository for storing and sharing artifacts.

It’s too early to say just how the building of AI models and traditional applications might converge, but ultimately an AI model needs to be deployed within the context of an application. Many data science teams need months to build an AI model before it can be deployed in a production environment. Aligning those efforts with the DevOps workflows an organization uses to build the applications in which an AI model is embedded can prove challenging, especially as the need to update and maintain AI models over time becomes more apparent. AI models need to be regularly updated as the assumptions used to build them drift once more data becomes available.
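As a rough illustration of what catching that drift can look like in practice, the sketch below compares the distribution of a single model input feature at training time against fresh production data using a two-sample Kolmogorov-Smirnov test from SciPy; the sample data and significance threshold are hypothetical, and real pipelines typically run checks like this across many features on a schedule.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    """Flag drift when the live distribution of a feature differs
    significantly from the distribution seen at training time."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic

# Hypothetical example: a feature whose mean has shifted in production.
rng = np.random.default_rng(seed=42)
training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_sample = rng.normal(loc=0.4, scale=1.0, size=5_000)

drifted, stat = feature_drifted(training_sample, production_sample)
print(f"drift detected: {drifted} (KS statistic = {stat:.3f})")
# A flagged feature would be the trigger to retrain or revalidate the model.
```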

Regardless of the approach to building AI models, the one thing that is certain is there will soon be a lot more of them. In fact, the bulk of new applications being developed today are going to invoke some type of AI capability. The challenge now is determining how best to manage the processes required to build and maintain them across data science and DevOps teams that have disparate cultures.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as editor-in-chief of CRN and InfoWorld.
