AI Security - Treat AI as a Junior Developer for Coding Errors
In short, AI coding assistants can make the same kinds of coding mistakes as beginner programmers.
At RSAC 2026, experts revealed that AI coding tools often introduce vulnerabilities similar to those written by junior developers. This raises concerns for organizations relying on AI for secure coding. It's crucial to adopt AI cautiously and implement specific security guidelines to mitigate risks.
What Happened
At RSAC 2026, Eyal Paz and Nir Zadok from OX Security shared insights on AI's role in coding. They discussed how AI coding assistants often produce insecure code, similar to a junior developer's work. Their findings highlighted vulnerabilities like path traversal, cross-site scripting (XSS), and server-side request forgery (SSRF). These issues arise from AI's reliance on human-coded examples, leading to predictable flaws in generated code.
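To illustrate the first of those flaw classes, here is a minimal Python sketch of a path traversal bug of the kind the researchers describe, alongside a safer version. This is not code from the talk; the `UPLOAD_ROOT` path and function names are hypothetical.

```python
import os

UPLOAD_ROOT = "/srv/app/uploads"  # hypothetical storage root

def resolve_upload_unsafe(filename: str) -> str:
    # Flawed: joins user input directly, so "../../etc/passwd" escapes the root.
    return os.path.join(UPLOAD_ROOT, filename)

def resolve_upload_safe(filename: str) -> str:
    # Normalize the joined path, then verify it still lives under UPLOAD_ROOT.
    candidate = os.path.normpath(os.path.join(UPLOAD_ROOT, filename))
    if os.path.commonpath([UPLOAD_ROOT, candidate]) != UPLOAD_ROOT:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

The unsafe version is exactly the kind of pattern an assistant can reproduce from training examples, because it looks correct for well-behaved filenames.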
The researchers tested several AI coding tools, including Lovable and Base44. They found that these tools struggled to detect and fix vulnerabilities, even when provided with specific security instructions. For instance, Lovable only identified issues in two out of three attempts during pre-deployment scans. This inconsistency raises concerns about the reliability of AI in secure software development.
Who's Affected
The vulnerabilities identified affect developers and organizations utilizing AI coding tools. Open-source projects like the DeepSeek OCR App and SaaS-Starter were highlighted as examples. The DeepSeek app, which processes PDFs, was found vulnerable to unauthenticated remote code execution. This flaw allows malicious users to upload harmful files, potentially compromising the server. Meanwhile, SaaS-Starter had multiple vulnerabilities, including open redirects and SSRF flaws, which could lead to severe security breaches.
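Since the DeepSeek app's flaw was unauthenticated upload of harmful files, a basic mitigation is to gate uploads on authentication and an extension allowlist. The sketch below is an assumption about a reasonable fix, not the project's actual patch; the `.pdf`-only policy and function name are hypothetical.

```python
import os

ALLOWED_EXTENSIONS = {".pdf"}  # assumed policy for a PDF-processing service

def is_acceptable_upload(filename: str, authenticated: bool) -> bool:
    # Reject anonymous uploads outright: the reported flaw was *unauthenticated* RCE.
    if not authenticated:
        return False
    # Allowlist safe extensions rather than trying to blocklist dangerous ones.
    _, ext = os.path.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS
```

In a real service this check would sit alongside content inspection and sandboxed processing, since an extension check alone does not verify file contents.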
These findings emphasize the need for developers to be cautious when integrating AI into their coding processes. Organizations relying on these tools may inadvertently introduce risks if they do not implement stringent security measures.
What Data Was Exposed
The vulnerabilities discovered in these projects could expose sensitive data and allow unauthorized access to internal services. For instance, the SSRF flaws in SaaS-Starter could potentially reveal confidential information or lead to data leaks. The open redirect issue could misdirect users to phishing sites, putting their data at risk.
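An open redirect of the kind described in SaaS-Starter typically stems from forwarding users to any URL supplied in a request parameter. As a hedged sketch (not SaaS-Starter's code; the trusted-host list is hypothetical), a validator might only follow relative paths or allowlisted HTTPS hosts:

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"app.example.com"}  # hypothetical allowlist

def safe_redirect_target(url: str) -> str:
    parsed = urlparse(url)
    # Relative paths (no scheme, no host) stay on the current site.
    if not parsed.scheme and not parsed.netloc:
        return url
    # Absolute URLs must point at a trusted host over HTTPS.
    if parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS:
        return url
    return "/"  # fall back to a safe default instead of following the redirect
```

Note that scheme-relative URLs like `//evil.example/phish` carry a host but no scheme, which is why both fields are checked before treating the target as same-site.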
Although the developer of SaaS-Starter has since patched these vulnerabilities, the incident serves as a reminder of the potential dangers associated with AI-generated code. The speed at which AI can produce code increases the risk of deploying insecure applications without proper vetting.
What You Should Do
Organizations should approach AI coding tools with caution. Experts recommend a gradual adoption strategy, allowing teams to assess the effectiveness of these tools over time. It is crucial to provide AI with specific security guidelines to mitigate common vulnerabilities.
Additionally, developers should conduct thorough code reviews and testing, even when using AI-generated code. By treating AI like a junior developer, teams can better manage the risks associated with its use. As Eyal Paz concluded, there is a responsible way to leverage AI for faster code creation while maintaining security standards.
SC Media