Canonical Simplifies Deployment of Cloud-Native Infrastructure

Canonical has made available a virtual appliance for the Amazon Web Services (AWS) cloud that is based on its lightweight MicroK8s distribution of Kubernetes, along with a Charmed Operator for the open source Kubeflow software used to manage machine learning operations (MLOps) on Kubernetes clusters running on that cloud platform.
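For context, Charmed Kubeflow is typically stood up on MicroK8s by handing the Kubeflow bundle to Juju. The Python sketch below mirrors that documented workflow; the add-on list and bundle name are illustrative assumptions rather than the exact configuration of Canonical's AWS appliance.

```python
"""Minimal sketch of the Charmed Kubeflow deployment flow on MicroK8s.

The steps mirror Canonical's published quick-start workflow; the add-on
list and bundle name are illustrative assumptions and should be checked
against the current Charmed Kubeflow documentation.
"""
import subprocess


def run(cmd: str) -> None:
    """Echo a shell command, run it and fail loudly if it returns non-zero."""
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True)


# Install MicroK8s and enable the add-ons Kubeflow relies on
# (DNS, local storage and an ingress controller).
run("sudo snap install microk8s --classic")
run("sudo microk8s enable dns hostpath-storage ingress")

# Point Juju at the local MicroK8s cluster, create a model and
# deploy the Kubeflow bundle as a set of Charmed Operators.
run("juju bootstrap microk8s")
run("juju add-model kubeflow")
run("juju deploy kubeflow --trust")
```

The point of the pattern is that Juju, rather than the administrator, handles the ordering, configuration and relations among the many components that make up Kubeflow.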

Canonical has also added a Charmed Operator for automating the deployment of the open source KubeVirt software, which makes it possible to encapsulate virtual machines in containers.
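KubeVirt's approach is to represent each virtual machine as a Kubernetes custom resource that its operator reconciles into a VM running inside a pod. The sketch below, using the official Kubernetes Python client, shows roughly what such a resource looks like; the demo CirrOS container-disk image, memory request and namespace are illustrative assumptions.

```python
"""Minimal sketch of a KubeVirt VirtualMachine custom resource, created
through the official `kubernetes` Python client. The CirrOS container-disk
image, memory request and namespace are illustrative assumptions; the
structure follows the kubevirt.io/v1 VirtualMachine API.
"""
from kubernetes import client, config

# A VM definition wrapped in a Kubernetes custom resource: the guest's
# root disk is shipped as a container image (a "containerDisk" volume).
vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "128Mi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {
                            "image": "quay.io/kubevirt/cirros-container-disk-demo"
                        },
                    }
                ],
            }
        },
    },
}

# Submit the custom resource; the KubeVirt operator reconciles it into
# a running virtual machine hosted inside a pod.
config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)
```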

Alex Jones, director of Kubernetes engineering for Canonical, says the overall goal is to automate the deployment of instances of curated open source software in a way that enables IT teams to accelerate the rate at which cloud-native applications can be built and deployed.

Charmed Operators are based on the Juju automation framework that Canonical developed to automate the deployment of the open source software it curates on behalf of IT organizations. In addition to being applied to cloud-native platforms, the framework can also be used to customize deployments of Canonical's Ubuntu distribution of Linux.

That’s critical as organizations, for example, look to leverage containers and Kubernetes running on a distribution of Linux that hosts an MLOps framework such as Kubeflow, notes Jones. Eventually, DevOps and MLOps frameworks will converge to the point where artifacts can share common repositories that IT teams will need to manage and monitor, he added.
In the longer term, IT organizations are also slowly embracing a hybrid approach to computing in the age of Kubernetes to access data stored in both public and private clouds, says Jones.

Regardless of the approach to building and deploying cloud-native applications, IT environments are becoming more complex to manage. Canonical is making a case for a consistent approach to managing open source software, whether it is deployed in the cloud, in a local data center or at the network edge, to help reduce the total cost of IT.

It’s not clear whether organizations are now trying to standardize on a common automation framework. Most IT teams today make use of multiple frameworks to automate various tasks, but with the rise of platform engineering teams, there is a move toward centralizing automation management within the context of a shared set of DevOps resources.

In the meantime, the need for higher levels of abstraction to make platforms such as Kubernetes more accessible is becoming crucial. As organizations find themselves deploying fleets of Kubernetes clusters across highly distributed IT environments, there’s a greater need for an automation framework that makes provisioning of the platforms that modern cloud-native applications depend on simpler for both DevOps teams and traditional IT administrators.

After all, no matter how much IT environments become automated, there simply isn’t enough cloud-native expertise available. In fact, it’s that lack of expertise that is arguably holding back the deployment of more of these applications.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.