Solo.io Extends kagent Runtime to NemoClaw Governance Framework for AI Agents
Solo.io this week added support for NemoClaw, an open source framework for safely deploying artificial intelligence (AI) agents, to the kagent runtime environment on Kubernetes, a project being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).
Launched earlier this year, NemoClaw is a reference stack created by NVIDIA that enables single-command installation for AI agents such as OpenClaw. It integrates the NVIDIA OpenShell runtime to provide a sandbox environment in which IT teams can apply guardrails and enforce policies.
The kagent project, meanwhile, was originally developed by Solo.io to enable IT teams to declaratively deploy AI agents on Kubernetes clusters at a higher level of abstraction, much like any other cloud-native workload.
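Declarative deployment here means describing an agent as a Kubernetes custom resource and letting a controller reconcile it, the same way the cluster reconciles a Deployment or Service. A minimal sketch of what such a manifest might look like (the apiVersion, kind and spec fields below are illustrative placeholders, not the actual kagent schema):

```yaml
# Illustrative sketch only: a Kubernetes custom resource describing an
# AI agent declaratively. Every field name here is hypothetical.
apiVersion: agents.example.com/v1alpha1
kind: Agent
metadata:
  name: support-triage-agent
  namespace: ai-agents
spec:
  description: Triages inbound support tickets
  model: example-model          # hypothetical model reference
  systemPrompt: |
    You are a support triage assistant.
  tools:
    - name: ticket-search       # hypothetical tool reference
```

Once applied with `kubectl apply -f agent.yaml`, a controller would reconcile the resource like any other declarative workload, which is what allows AI agents to be managed with the same tooling as the rest of the cluster.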
Solo.io CEO Idit Levine said integrating NemoClaw with kagent makes it possible to more safely deploy AI agents at scale in a Kubernetes environment in a way that can be more easily governed and audited. Built-in telemetry and tracing enable IT teams to track exactly which actions an AI agent performed, and when.
The kagent project is one of a series of open source agentic AI initiatives that Solo.io has launched. Others include Agentgateway, a unified gateway that supports multiple protocols and has been contributed to the Linux Foundation, and Agentregistry, a tool for discovering, packaging and distributing agents and tools that has been contributed to the CNCF.
Finally, there is Agentevals, an open source project that captures evaluation and quality signals using OpenTelemetry, open source observability software that is now widely supported by multiple observability platforms.
It’s not clear how many AI agents are being deployed in Kubernetes environments, but as more IT teams become involved in these projects, there is going to be a lot more interest in deploying them on Kubernetes clusters that are already being used to run a range of other classes of workloads, said Levine. Most IT teams are not going to want to manage yet another type of platform just to run AI agents, she noted. In fact, most AI workloads will wind up being managed by the platform engineering teams that have recently emerged to manage IT environments at scale using DevOps best practices, added Levine.
To further that goal, the CNCF has also defined a set of Kubernetes AI Requirements (KARs) for its Kubernetes AI Conformance Program to help ensure AI inference engines can run at scale on Kubernetes clusters. Stable in-place pod resizing, which lets inference models adjust their resources without needing to restart, and workload-aware scheduling to avoid resource deadlocks during distributed training are now, for example, mandatory requirements.
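In-place pod resizing works by marking container resources as resizable, so the kubelet can apply new CPU and memory values without restarting the container. A hedged sketch of the relevant pod spec (the names and image are placeholders, not from any real deployment):

```yaml
# Sketch of Kubernetes in-place pod resizing: resizePolicy tells the
# kubelet these resources can change without a container restart.
# Pod name, container name, and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  containers:
    - name: model-server
      image: registry.example.com/inference:latest
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
      resources:
        requests:
          cpu: "2"
          memory: 8Gi
```

With a policy like this in place, an inference workload's resource requests can later be adjusted by patching the running pod rather than recreating it, which is why the capability matters for models that cannot tolerate restarts.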
It’s still early days so far as deployments of AI agents in production environments are concerned, but there is little doubt that thousands of them will soon be strewn across the enterprise. The one thing that is certain is that AI agents will consume massive amounts of IT infrastructure resources in ways that will be difficult to anticipate. As such, the challenge and the opportunity now is finding ways to ensure those AI agents access only the data and context they need to reliably automate a task.


