AI & Security · HIGH

AI Security - Testing Your Expanding Attack Surface

🎯

Basically, AI is creating code faster than we can check if it's safe.

Quick Summary

AI-generated code is often insecure: in a recent benchmark, 62% of LLM-generated code was insecure or broken. As AI agents call undocumented APIs, traditional security tools struggle to keep up. Snyk's AI-powered testing offers one answer.

What Happened

The rise of AI coding assistants has transformed software development, enabling developers to generate code at unprecedented speeds. However, this rapid production has outpaced security validation, raising significant concerns about the safety of AI-generated code. A recent benchmark study revealed that 62% of LLM-generated code is insecure or broken, highlighting the urgent need for effective testing methods. As organizations embrace AI, they must also address the vulnerabilities that come with it.

AI agents, which autonomously call APIs, often interact with undocumented and privileged endpoints. This creates a new attack surface that traditional security measures are ill-equipped to handle. With projections indicating that AI agents will become the primary consumers of enterprise APIs by 2028, the risk of exploitation is growing. One notable example is CVE-2025-12420, where an unauthenticated attacker could impersonate a ServiceNow administrator and gain full access to sensitive data. This underscores the necessity for advanced security testing in an AI-driven landscape.

Who's Being Targeted

The vulnerabilities in AI-generated code, and in the APIs that AI agents invoke, put a range of stakeholders at risk, including organizations that rely on AI for development and their end users. Developers are often overwhelmed by the volume of alerts that traditional security tools generate, leading to alert fatigue. Critical vulnerabilities can then be overlooked as developers struggle to decide which issues to address first.

Moreover, as AI agents increasingly handle sensitive operations, the potential for data breaches escalates. Organizations must recognize that the speed of AI code generation and the complexity of API interactions create an environment ripe for exploitation. If security measures are not adapted to this new reality, the consequences could be severe, affecting both the organization and its customers.

Tactics & Techniques

To combat these challenges, organizations need a more dynamic approach to security testing. Traditional static analysis tools can identify vulnerabilities in code but often cannot demonstrate how those vulnerabilities would be exploited in practice. This is where dynamic testing comes in: it lets security teams probe running applications and APIs in real time, mimicking the actions of a malicious actor.
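As a minimal sketch of that idea, the snippet below replays API calls without credentials and flags endpoints that still answer successfully, i.e. broken access control. The endpoint list and the stubbed `fake_send` transport are invented for illustration; a real DAST tool would drive live HTTP traffic against a running target.

```python
# Sketch: probe each endpoint with no credentials attached and flag any
# that return a 2xx status anyway (broken access control).

def flag_auth_bypass(endpoints, send):
    """Return endpoints that answer 2xx to an unauthenticated request.

    `send(method, path)` is any transport that returns an HTTP status code.
    """
    findings = []
    for method, path in endpoints:
        status = send(method, path)  # deliberately no Authorization header
        if 200 <= status < 300:
            findings.append((method, path, status))
    return findings

# Stubbed transport standing in for a live target (assumed for the demo):
RESPONSES = {
    ("GET", "/api/v1/users"): 401,         # correctly rejects anonymous calls
    ("GET", "/api/internal/export"): 200,  # undocumented endpoint, no auth check
}

def fake_send(method, path):
    return RESPONSES.get((method, path), 404)

print(flag_auth_bypass(list(RESPONSES), fake_send))
# flags only the endpoint that served data without credentials
```

The same loop generalizes to replaying recorded agent traffic with downgraded privileges, which is how a probe can reach the undocumented endpoints that static analysis never sees.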

Modern dynamic testing solutions must be AI-powered, capable of keeping pace with the rapid development cycles of AI-generated code. By correlating static findings with dynamic proof, organizations can prioritize fixes based on actual exploitability rather than mere theoretical vulnerabilities. This approach not only enhances security but also instills confidence in developers, allowing them to ship code faster without compromising safety.
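The correlation step can be sketched in a few lines: a static finding that a dynamic probe actually reproduced outranks one that is only theoretical, regardless of declared severity. The finding IDs and field names below are illustrative, not any vendor's schema.

```python
# Sketch: rank static findings by (dynamically confirmed, declared severity),
# so proven-exploitable issues rise to the top of the fix queue.

SEVERITY = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def prioritize(static_findings, exploited_ids):
    """Sort findings: dynamically confirmed first, then by severity."""
    return sorted(
        static_findings,
        key=lambda f: (f["id"] in exploited_ids, SEVERITY[f["severity"]]),
        reverse=True,
    )

static_findings = [
    {"id": "SQLI-12", "severity": "high"},
    {"id": "XSS-7", "severity": "critical"},
    {"id": "CSRF-3", "severity": "medium"},
]
exploited = {"SQLI-12", "CSRF-3"}  # confirmed by dynamic testing

for f in prioritize(static_findings, exploited):
    print(f["id"])
# proven SQLI-12 (high) outranks unproven XSS-7 (critical)
```

The design choice is deliberate: exploitability evidence is the primary sort key, so an unconfirmed "critical" never crowds out a demonstrated "high".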

Defensive Measures

Organizations must evolve their security strategies to address the unique challenges posed by AI-generated code and agents. This includes implementing intelligent dynamic application security testing (DAST) that actively probes running applications and APIs. By continuously monitoring for vulnerabilities, organizations can ensure that they are not only identifying issues but also addressing them effectively.

Additionally, establishing a robust API discovery process is crucial. This helps organizations track the endpoints that AI agents are accessing, including undocumented ones. By embedding security measures within the development pipeline, organizations can achieve continuous coverage and reduce the risk of vulnerabilities slipping through the cracks.
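One simple form of that discovery step is diffing the endpoints observed in gateway access logs against the documented API surface, so undocumented paths that agents are actually calling get surfaced. The log format and paths below are invented for illustration; a real pipeline would parse the gateway's actual log schema or an OpenAPI document.

```python
# Sketch: compare endpoints seen in access logs against documented paths,
# surfacing undocumented ones that AI agents are actually calling.

documented = {"/api/v1/users", "/api/v1/orders"}

access_log = [
    "agent-7 GET /api/v1/users 200",
    "agent-7 POST /api/internal/admin/impersonate 200",
    "agent-3 GET /api/v1/orders 200",
]

# Third whitespace-separated field of each log line is the request path.
observed = {line.split()[2] for line in access_log}

undocumented = sorted(observed - documented)
print(undocumented)  # ['/api/internal/admin/impersonate']
```

Run continuously inside the pipeline, this diff keeps the documented surface honest as agents adopt new endpoints.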

In conclusion, the conversation around AI security must shift from merely securing AI models to understanding who is testing the code and APIs that AI generates. As AI continues to evolve, so too must our approaches to security, ensuring that we are prepared for the challenges ahead.

🔒 Pro insight: The rapid integration of AI in development necessitates a paradigm shift in security testing to mitigate emerging vulnerabilities effectively.

Original article from Snyk Blog

Related Pings

MEDIUM · AI & Security

AI Security - Cloudflare Launches Kimi K2.5 Model

Cloudflare has launched the Kimi K2.5 model on Workers AI, enhancing agent capabilities. This innovation significantly reduces inference costs, making AI more accessible for enterprises. As AI adoption grows, Cloudflare's solution addresses the need for cost-effective, scalable AI agents.

Cloudflare Blog

MEDIUM · AI & Security

AI Security - Microsoft Introduces Zero Trust for AI

Microsoft has launched Zero Trust for AI, providing new tools and guidance for secure AI integration. This initiative helps organizations manage unique AI risks effectively. Stay ahead of potential threats with these updated resources.

Microsoft Security Blog

MEDIUM · AI & Security

AI Security - Salt Security Launches New Protection Platform

Salt Security has launched a new platform to secure AI agents within enterprises. This tool enhances visibility and governance, helping organizations safely adopt AI technologies. As AI integration grows, so does the need for effective security measures. Stay ahead of potential risks with this innovative solution.

IT Security Guru

HIGH · AI & Security

AI Security - Vibe Hacking Emerges as a New Threat

A new threat called vibe hacking is emerging, using AI to empower less skilled attackers. Recent breaches show how AI tools enable these cybercriminals, raising serious security concerns. Organizations must adapt to this evolving threat landscape to protect sensitive data.

SC Media

HIGH · AI & Security

AI Security - Protecting Homegrown Agents with CrowdStrike

CrowdStrike and NVIDIA have teamed up to enhance AI security. Their new integration protects homegrown AI agents from attacks and data leaks. This is vital as AI becomes a key business tool.

CrowdStrike Blog

MEDIUM · AI & Security

AI Security - Monitoring Internal Coding Agents Explained

OpenAI is monitoring its coding agents to prevent misalignment. This initiative aims to enhance AI safety and reduce risks. Understanding these measures is vital for responsible AI development.

OpenAI News