
AI Security - Treat AI as a Junior Developer for Coding Errors

SC Media
Tags: AI coding assistants · vulnerabilities · DeepSeek OCR App · SaaS-Starter · OX Security
🎯 Basically, AI can make coding mistakes like a beginner programmer.

Quick Summary

At RSAC 2026, experts revealed that AI coding tools often produce vulnerabilities similar to those of junior developers. This raises concerns for organizations relying on AI for secure coding. It's crucial to adopt AI cautiously and implement specific security guidelines to mitigate risks.

What Happened

At RSAC 2026, Eyal Paz and Nir Zadok of OX Security presented research showing that AI coding assistants often produce insecure code comparable to a junior developer's work. Their findings highlighted vulnerabilities such as path traversal, cross-site scripting (XSS), and server-side request forgery (SSRF). These flaws stem from AI models learning from human-written code examples, which leads to predictable weaknesses in generated output.
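To make the path traversal class concrete, here is a minimal sketch of the kind of check AI-generated file-handling code often omits. The function name and structure are illustrative assumptions, not code from the talk:

```python
import os

def safe_resolve(base_dir: str, user_path: str) -> str:
    """Resolve a user-supplied path, rejecting traversal outside base_dir.

    Illustrative sketch: real code would also handle symlinks created
    between check and use.
    """
    candidate = os.path.realpath(os.path.join(base_dir, user_path))
    base = os.path.realpath(base_dir)
    # A path that escapes the base directory (e.g. via "../") resolves
    # outside `base`, so their common prefix is no longer `base` itself.
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("path traversal attempt blocked")
    return candidate
```

Generated code that simply joins `base_dir` with user input, without the `realpath`/`commonpath` step, is exactly the junior-developer mistake the researchers describe.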

The researchers tested several AI coding tools, including Lovable and Base44, and found that they struggled to detect and fix vulnerabilities even when given explicit security instructions. Lovable, for instance, flagged the issues in only two of three pre-deployment scan attempts. This inconsistency raises concerns about relying on AI for secure software development.

Who's Affected

The vulnerabilities identified affect developers and organizations utilizing AI coding tools. Open-source projects like the DeepSeek OCR App and SaaS-Starter were highlighted as examples. The DeepSeek app, which processes PDFs, was found vulnerable to unauthenticated remote code execution. This flaw allows malicious users to upload harmful files, potentially compromising the server. Meanwhile, SaaS-Starter had multiple vulnerabilities, including open redirects and SSRF flaws, which could lead to severe security breaches.
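The upload flaw described above is the classic case where a server trusts whatever file a client sends. A minimal sketch of the server-side validation such an app would need — the function name, and the decision to check both extension and magic bytes, are assumptions for illustration, not details from the article:

```python
PDF_MAGIC = b"%PDF-"  # leading bytes of a well-formed PDF

def looks_like_pdf(data: bytes, filename: str) -> bool:
    """Cheap server-side upload check: extension plus magic bytes.

    Illustrative only: a real deployment should additionally parse the
    file in a sandboxed process, since magic bytes alone are spoofable.
    """
    if not filename.lower().endswith(".pdf"):
        return False
    return data[:5] == PDF_MAGIC
```

An unauthenticated endpoint that skips even this check, and then passes the upload to a processing pipeline, is how a "harmful file" can escalate to remote code execution.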

These findings emphasize the need for developers to be cautious when integrating AI into their coding processes. Organizations relying on these tools may inadvertently introduce risks if they do not implement stringent security measures.

What Data Was Exposed

The vulnerabilities discovered in these projects could expose sensitive data and allow unauthorized access to internal services. For instance, the SSRF flaws in SaaS-Starter could potentially reveal confidential information or lead to data leaks. The open redirect issue could misdirect users to phishing sites, putting their data at risk.
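Open redirects of the kind found in SaaS-Starter are usually fixed with an allowlist on the redirect target. A minimal sketch of such a check, assuming a hypothetical `ALLOWED_HOSTS` set (not taken from the project's actual fix):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com"}  # hypothetical allowlist of trusted hosts

def is_safe_redirect(url: str) -> bool:
    """Allow only same-site relative paths or allowlisted http(s) hosts."""
    parsed = urlparse(url)
    if not parsed.scheme and not parsed.netloc:
        # Relative path: stays on-site. Protocol-relative URLs such as
        # "//evil.example" carry a netloc, so they do not reach this branch.
        return True
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS
```

The same allowlist pattern applies to SSRF: any URL a server fetches on a user's behalf should be validated against known-good hosts before the request is made.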

Although the developer of SaaS-Starter has since patched these vulnerabilities, the incident serves as a reminder of the potential dangers associated with AI-generated code. The speed at which AI can produce code increases the risk of deploying insecure applications without proper vetting.

What You Should Do

Organizations should approach AI coding tools with caution. Experts recommend a gradual adoption strategy, allowing teams to assess the effectiveness of these tools over time. It is crucial to provide AI with specific security guidelines to mitigate common vulnerabilities.

Additionally, developers should conduct thorough code reviews and testing, even when using AI-generated code. By treating AI like a junior developer, teams can better manage the risks associated with its use. As Eyal Paz concluded, there is a responsible way to leverage AI for faster code creation while maintaining security standards.

🔒 Pro insight: The findings underscore the necessity for robust security practices when utilizing AI in software development, especially in open-source environments.

Original article from SC Media.
