A critical vulnerability in the popular open-source 0DIN AI Scanner could allow authenticated users to execute arbitrary code inside the application container, according to a Friday security alert from GitHub and the National Vulnerability Database.
AI Scanner is an AI model safety scanner from 0DIN, Mozilla’s AI security team. The tool runs on top of NVIDIA garak, using it as the underlying LLM vulnerability testing engine.
NVIDIA garak is not the vulnerable component. The flaw, tracked as CVE-2026-41512 and carrying a CVSS score of 9.9, affects only the 0DIN AI Scanner, a web-based security tool built on garak.
The flaw affects anyone running AI Scanner versions 1.0.0 through 1.4.0. Users should update to version 1.4.1 or later immediately to reduce the risk of unauthorized access, code execution and data exposure, according to the advisory.
There are reports the flaw has been exploited in the wild.
The bug earned a critical rating because any logged-in user could exploit it to run commands on a targeted server with the application's permissions, allowing an adversary to steal database passwords, encryption keys, private API keys and other sensitive data. The bug also threatens tenant isolation, meaning one user could potentially reach data belonging to other users or organizations on the same installation, according to a GitHub description of the vulnerability.
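The advisory does not publish exploit details, but the bug class it describes (authenticated user input reaching operating-system command execution) follows a familiar pattern. The sketch below is purely illustrative and hypothetical, not the actual AI Scanner code; the function names and the placeholder `echo` command are invented to show why shell interpolation of user input is dangerous and how passing arguments as a list avoids it.

```python
# Hypothetical sketch of the bug class, NOT the actual 0DIN AI Scanner code.
import subprocess

def run_scan_vulnerable(model_name: str) -> str:
    # Anti-pattern: user-controlled input interpolated into a shell string.
    # A value like "gpt2; cat /app/.env" makes the shell run the attacker's
    # command with the application's permissions.
    cmd = f"echo scanning {model_name}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def run_scan_fixed(model_name: str) -> str:
    # Safer: pass arguments as a list so no shell interprets metacharacters;
    # the input is handed to the program as inert data.
    cmd = ["echo", "scanning", model_name]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

payload = "gpt2; id"
print(run_scan_vulnerable(payload))  # shell executes the injected "id" command
print(run_scan_fixed(payload))       # payload is printed back as plain text
```

The same principle applies regardless of language: any path from an authenticated request parameter to a shell, template engine or deserializer is a candidate for this class of flaw.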
The number of impacted users is unclear. Mozilla announced the open-source release of 0DIN AI Scanner in April 2026. While the project has drawn attention in the AI security community — with hundreds of GitHub stars and growing developer interest — no public enterprise adoption or download figures were immediately available.
The more important takeaway is that AI security tools themselves are increasingly becoming part of the AI attack surface.
Cobalt Strike, built for adversary simulation, became so widely abused by criminals that Europol coordinated a global crackdown on cracked copies in 2024.
Metasploit has long carried the same dual-use problem: defenders use it to validate exposure, while attackers can use the same framework outside authorized testing.
Kaseya VSA showed a related risk from another angle in 2021, when attackers abused privileged IT management software to push ransomware through MSP environments. CVE-2026-41512 brings that old lesson into the AI era: tools built to test risk can become part of the risk surface themselves.
The 0DIN flaw is not the same as the examples above, and it does not suggest that AI scanners are under sustained attack.
The example does show that the new AI security stack is still software, and often highly privileged software. These tools run as web apps, containers, automation services and dashboards, and they handle credentials, API keys, model endpoints, scan results and tenant data tied to the same brittle enterprise attack surface.
Photo by David Pupăză on Unsplash