Docker Inc. Allies with NanoCo to Deploy General-Purpose AI Agent Safely
Docker Inc. has formed an alliance with NanoCo that makes it simpler to deploy NanoCo's lightweight NanoClaw artificial intelligence (AI) agent inside Docker Sandboxes, which ensure the agent is only able to access a limited set of IT resources.
NanoClaw is an open source alternative to the OpenClaw AI agent that has been designed to run in an isolated container. A Docker Sandbox, meanwhile, is based on a framework that makes use of micro virtual machines (microVMs) to provide deeper isolation.
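The kind of resource restriction described above can be approximated with standard Docker CLI flags. A minimal sketch, with the caveat that the image name and workspace path are purely illustrative and that Docker Sandboxes layer microVM isolation on top of container-level controls like these:

```shell
# Hypothetical sketch: constrain what an agent container can touch using
# standard docker run flags. "nanoco/nanoclaw" and ./workspace are
# illustrative names, not confirmed artifacts of the NanoCo release.
docker run --rm \
  --network=none \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --memory=512m \
  --cpus=1 \
  -v "$(pwd)/workspace:/workspace" \
  nanoco/nanoclaw
```

Here `--network=none` keeps the agent from reaching other agents or external services, `--read-only` plus a single bind mount limits which files it can see and modify, and `--cap-drop=ALL` with `no-new-privileges` strips the kernel privileges it would need to escalate.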
Docker Inc. president and COO Mark Cavage said the overall goal is to make it as safe as possible to deploy a general-purpose AI agent by minimizing the size of the attack surface exposed while simultaneously limiting which data and external accounts the agent is allowed to access.
NanoCo CEO Gavriel Cohen said that, in addition to limiting which files NanoClaw can access, the sandbox also ensures that AI agents are not communicating with other AI agents that might be asked to perform a harmful task, such as deleting a database running in a production environment.
While OpenClaw has generated a lot of interest, it's not clear how many organizations have approved deploying it. Many organizations, for example, are limiting risk by deploying OpenClaw on an isolated machine, such as a Mac mini, to limit the number of files an AI agent is allowed to access. Docker Inc. is making a case for an alternative approach that employs a lightweight sandbox based on a microVM framework it previously developed.
There is, of course, suddenly no shortage of general-purpose AI agents, but NanoClaw is different in that it is based on just 15 core source files, which reduces the number of lines of code that need to be deployed by up to 100 times compared to other AI agents, noted Cohen.
The challenge, ultimately, is to find a way to safely deploy general-purpose AI agents that might be subjected to, for example, a prompt injection attack instructing them to share sensitive data with a malicious actor or, just as concerning, to wreak havoc by making changes to a production IT environment. The AI agent itself isn't inherently malicious, but in its effort to complete a task it often tends to ignore the guardrails that IT teams have put in place. More troubling still, AI agents will expose governance gaps where no one had previously thought there was a need for a guardrail in the first place.
Regardless of how general-purpose AI agents are eventually deployed, the one thing that is certain is that they are not going away. IT teams will be expected to find ways to minimize the risk without instituting outright bans that are likely to be ignored in the name of increasing productivity. Given that mission, the onus for determining which general-purpose AI agent to deploy now falls on the IT and cybersecurity teams that will be tasked with cleaning up whatever incident inevitably occurs if end users are left to their own devices.


