Noma Security Identifies Security Flaw in Docker AI Assistant
Noma Security today revealed it has discovered a critical security flaw in the artificial intelligence (AI) assistant made available by Docker, Inc.
Sasi Levi, security research lead for Noma Security, said the flaw, dubbed DockerDash, enables a malicious actor to use a single malicious metadata label in a Docker image to compromise the Ask Gordon AI assistant using a simple three-stage attack that starts with an indirect prompt.
Once the Ask Gordon AI assistant reads and interprets that malicious instruction, it forwards it to a Model Context Protocol (MCP) Gateway that provides access to an application development tool to execute the instruction. Because each of those stages of the software engineering workflow does not require any human validation, it’s relatively easy for a cybercriminal to compromise the entire Docker environment, noted Levi.
Additionally, an associated data exfiltration vulnerability exploits the same prompt-injection flaw as the RCE vulnerability but targets Docker Desktop’s implementation of Ask Gordon AI. While Docker Desktop restricts Ask Gordon to read-only permissions, this constraint doesn’t prevent information disclosure. An attacker can still weaponize Ask Gordon’s read access to exfiltrate sensitive internal data about the victim’s environment.
The core issue is a failure of contextual trust. An MCP Gateway cannot distinguish between informational metadata, such as a standard Docker LABEL, and a pre-authorized, runnable internal instruction. By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process to launch a Meta-Context Injection attack.
As a provider of a platform for securing AI tools and agents, Noma Security is in the process of researching security flaws in AI coding tools. In fact, just about any AI tool being used by application development teams, in the absence of any controls or policies being put in place, can be similarly manipulated using a malicious indirect prompt injection attack, said Levi.
The simple truth is that it has not been this easy to compromise an IT environment since the early days of IT when operating systems were deployed without any security protocols, noted Levi.
The challenge is application developers are adopting AI tools with little to no understanding of their inherent risks, he added. A recent Futurum Group survey, for example, finds 60% are already using AI tools to build software, with another 40% preparing to increase investments in generative AI coding tools in the next 12 to 18 months. As such, it’s now a question of how often cybercriminals will be able to compromise these tools to compromise software supply chains before DevSecOps teams are able to craft and apply the appropriate policies.
In the meantime, application developers would be well advised to make sure that AI coding tools are being used safely. Modern cloud-native applications tend to be among the most mission critical applications that organizations are deploying so potential risks are substantial.
Ultimately, however, it’s not so much a question of preventing application developers from using AI coding tools so much as it is making sure the right governance and security controls are consistently observed.


