Curved Kubernetes: Microsoft Workload Orchestration in Azure Arc
The constant march to simplify Kubernetes deployments takes a new turn. Microsoft has announced the general availability of “workload orchestration”, a new service in Azure Arc, the company’s extension mechanism that brings Azure cloud management capabilities outside Azure itself to edge and other cloud environments.
The modestly lowercase-branded workload orchestration service is designed to simplify how software development teams deploy and manage Kubernetes-based applications across distributed (and typically diverse) edge environments.
One Size Does Not Fit All
That typical diversity of deployment architecture and configuration comes down to the fact that – again, typically – cloud estates running in manufacturing, retail, healthcare, construction and other environments face challenges in managing varied site-specific configurations.
For example, the data input/output pipe on a modern construction site is clearly different from that which might exist in a hospital running several MRI scanners. Other differences manifest themselves across storage requirements, networking configurations and the fact that the hospital might have to run in an air-gapped environment. Even further diversity will exist when we take into account regional (human) language translation requirements, safety provisioning controls and the actual number of devices that need to be supported in the field.
A traditional way to cope with this diversity is to duplicate application variants as a shortcut: teams create and maintain multiple versions of the same application for different sites. This approach is error-prone, costly and hard to scale.
A Centralized Template-Driven Model
Microsoft says that workload orchestration solves these issues with a centralized, template-driven model. Cloud management teams define configurations once for each specific use case, at the level of granularity each deployment needs. They can then deploy those configurations across all sites and allow local teams to adjust them within appropriate guardrails.
Those guardrails ensure consistency is maintained even when custom adjustments are made. Microsoft promises this works for CI/CD workflows whether they need to support a handful of factories, several hundred offline retail clusters or regionally compliant hospital apps.
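The template-plus-guardrails idea can be sketched in a few lines of Python. This is purely illustrative: the `base_template`, `adjustable_keys` and `resolve_config` names, and the dictionary-based schema, are hypothetical and are not Microsoft's actual configuration model or API.

```python
# Illustrative sketch of a centralized template with guardrailed,
# site-specific overrides -- NOT Microsoft's actual schema or API.

# The central team defines the configuration once, including which
# keys local teams are permitted to adjust (the "guardrail").
base_template = {
    "replicas": 2,
    "language": "en",
    "safety_threshold": 0.95,
}
adjustable_keys = {"language", "safety_threshold"}  # the guardrail

def resolve_config(site_overrides: dict) -> dict:
    """Merge a site's overrides into the base template, rejecting
    any key the guardrail does not permit."""
    illegal = set(site_overrides) - adjustable_keys
    if illegal:
        raise ValueError(f"override not permitted: {sorted(illegal)}")
    return {**base_template, **site_overrides}

# A Spanish-language factory adjusts only what it is allowed to;
# an attempt to change the replica count would raise ValueError.
madrid = resolve_config({"language": "es", "safety_threshold": 0.99})
print(madrid)
```

The point of the sketch is the division of labor: the template is authored once centrally, while local adjustments are validated against an explicit allow-list rather than trusted blindly.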
Microsoft provided an example scenario in which a manufacturing organization has multiple factory locations, each running a different number of computers with their own safety thresholds and application configuration parameters… some in English, some in Spanish.
“Whenever there’s an update to this app – like a new feature or a bug fix – it has to be deployed carefully to each computer in every location, making sure it keeps the factory-specific settings intact. That’s already a big job for one app. But in reality, factories don’t run on just one app. They have many – some monitoring sensors, others doing predictive maintenance, some running on old Windows systems, others powered by AI,” explained the company on its Microsoft Learn pages.
Sector-Agnostic Headaches
These challenges occur in every industry; workload orchestration headaches are sector-agnostic, found wherever distributed operations rely on consistent, localized applications, including retail, restaurants, energy and healthcare.
All workload orchestration resources are managed through Microsoft Azure Resource Manager, enabling Role-Based Access Control (RBAC) and consistent governance. DevOps engineers can interact with workload orchestration through the Azure Command Line Interface (CLI) and the Azure portal. Beyond the CLI, Microsoft says that some non-coders (by which it means operational technology administrators) will also benefit from a user-friendly interface for authoring, monitoring and deploying solutions with site-specific configurations.
Context-Aware Rollouts
As an additional (and important) function, workload orchestration supports context-aware rollouts. Configurations can adapt to the different environments of the software development lifecycle, from initial development through testing and quality assurance to final live production.
This technology features container image preloading and dependency management controls that are built in from the start. Microsoft says it wants teams to be able to achieve hassle-free updates in all scenarios, even in those where the maintenance window is short. Security and operations observability is provided via integrations with Azure Monitor and OpenTelemetry. Redmond makes note of Kubernetes diagnostics in workload orchestration that provide full-stack observability by capturing container logs, Kubernetes events, system logs and deployment errors.
Microsoft principal group product manager Supriyo Banerjee encourages users to try workload orchestration with a small application deployed to a few edge sites. “Create a template, define parameters like site name or configuration toggles and run a deployment. As you grow more comfortable, expand to more sites or complex applications,” he said.