PraisonAI has patched a critical sandbox escape vulnerability in versions prior to 1.5.115 that allowed AI-generated code to bypass security restrictions and execute arbitrary commands on host systems.
PraisonAI, a popular production-ready AI agent framework, issued a patch for the flaw, which carries a maximum CVSS score of 10. As of this report, the framework's creator says the affected versions include "all versions shipping sandbox_mode="sandbox" (default since introduction) through 1.5.113." Users are advised to upgrade to version 1.5.115 immediately to address this risk.
Palo Alto Networks describes the rise of AI framework vulnerabilities as an “agentic AI security crisis” as organizations struggle to secure autonomous agents with broad, human-free access to sensitive data.
PraisonAI joins a growing list of frameworks that have recently mitigated similar issues. Last month, flaws found in tools such as Langflow and CrewAI demonstrated a trend in which the rush to enable code execution has outpaced necessary safety measures, according to experts.
What is PraisonAI?
PraisonAI coordinates a "digital repair crew" of AI agents that automatically monitor, diagnose, and fix data pipeline issues 24/7. This autonomous, self-healing approach removes the need for manual, reactive repairs, allowing company data systems to grow reliably without constant human intervention, according to an overview from IT services and consulting firm AIMultiple.
The PraisonAI vulnerability functions as a "security domino effect": an attacker intentionally crashes a sandboxed process to exploit unblocked attributes in the error traceback, resulting in remote code execution. By navigating through exposed internal controls (tb_frame, f_back, f_builtins), the exploit escapes the sandbox to run arbitrary commands on the host system, according to the technical details of the sandbox escape published on GitHub.
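To make the mechanism concrete, below is a minimal, generic Python sketch of the traceback-walking technique the advisory describes. It is not the PraisonAI exploit itself; the `restricted()` function here simply stands in for sandboxed code, and no sandbox is actually bypassed. The sketch only shows why unfiltered traceback objects are dangerous: catching a deliberate crash hands the attacker frame objects, and chaining `tb_frame` and `f_back` climbs out of the "sandboxed" call until `f_builtins` on an outer frame restores full access to Python's built-in functions.

```python
import sys

def restricted():
    # Stand-in for sandboxed code: crash on purpose so the
    # traceback (and its frame objects) become reachable.
    raise ValueError("deliberate crash")

try:
    restricted()
except ValueError:
    tb = sys.exc_info()[2]
    # tb_next points at the frame that raised -- in a real sandbox,
    # the restricted code's own frame.
    frame = tb.tb_next.tb_frame
    # f_back walks outward, frame by frame, past any sandbox boundary
    # that failed to block these attributes.
    while frame.f_back is not None:
        frame = frame.f_back
    # f_builtins on the outermost frame exposes the full builtins
    # namespace, defeating builtins-stripping inside the sandbox.
    escaped_builtins = frame.f_builtins
    print("print" in escaped_builtins)  # → True
```

A sandbox that blocks `exec` or strips `__builtins__` but still lets evaluated code touch traceback and frame attributes leaves this path open, which is why the patch focuses on blocking those attributes rather than only filtering dangerous function names.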