AI Security - Testing Your Expanding Attack Surface
AI is generating code faster than we can check whether it's safe.
AI-generated code is often insecure: in a recent benchmark, 62% of LLM-generated code tested as insecure or broken. And as AI agents call undocumented APIs, traditional security tools struggle to keep up. Snyk's AI-powered testing offers a solution.
What Happened
The rise of AI coding assistants has transformed software development, enabling developers to generate code at unprecedented speeds. However, this rapid production has outpaced security validation, raising significant concerns about the safety of AI-generated code. A recent benchmark study revealed that 62% of LLM-generated code is insecure or broken, highlighting the urgent need for effective testing methods. As organizations embrace AI, they must also address the vulnerabilities that come with it.
AI agents, which autonomously call APIs, often interact with undocumented and privileged endpoints. This creates a new attack surface that traditional security measures are ill-equipped to handle. With projections indicating that AI agents will become the primary consumers of enterprise APIs by 2028, the risk of exploitation is growing. One notable example is CVE-2025-12420, where an unauthenticated attacker could impersonate a ServiceNow administrator and gain full access to sensitive data. This underscores the necessity for advanced security testing in an AI-driven landscape.
Who's Being Targeted
The vulnerabilities in AI-generated code, and in the APIs that code invokes, put a range of stakeholders at risk, including organizations relying on AI for development and their end users. Developers are often overwhelmed by the volume of alerts generated by traditional security tools, leading to alert fatigue. This can result in critical vulnerabilities being overlooked, as developers struggle to prioritize which issues to address.
Moreover, as AI agents increasingly handle sensitive operations, the potential for data breaches escalates. Organizations must recognize that the speed of AI code generation and the complexity of API interactions create an environment ripe for exploitation. If security measures are not adapted to this new reality, the consequences could be severe, affecting both the organization and its customers.
Tactics & Techniques
To combat these challenges, organizations need to adopt a more dynamic approach to security testing. Traditional static analysis tools can identify vulnerabilities in code but often fail to demonstrate how those vulnerabilities can be exploited in real-world scenarios. This is where dynamic testing comes into play, allowing security teams to probe applications and APIs in real time, mimicking the actions of a malicious actor.
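As a minimal sketch of what such a probe might look like: request sensitive endpoints with no credentials and flag any that respond successfully. The endpoint paths and the `fetch` callable here are hypothetical stand-ins for real unauthenticated HTTP requests against a staging deployment, not a specific tool's API.

```python
def probe_unauthenticated(endpoints, fetch):
    """Request each endpoint with no credentials via `fetch` and flag
    any that answer with a 2xx status, meaning content that should be
    protected appears reachable anonymously."""
    findings = []
    for path in endpoints:
        status = fetch(path)  # hypothetical: an HTTP GET with no auth header
        if 200 <= status < 300:
            findings.append(path)
    return findings


# Simulated responses for illustration; a real probe would issue live
# HTTP requests. The admin endpoint answering 200 without auth is the
# kind of flaw behind admin-impersonation bugs.
responses = {"/api/admin/users": 200, "/api/internal/config": 401}
print(probe_unauthenticated(sorted(responses), responses.get))
# -> ['/api/admin/users']
```

The key design point is that the probe exercises the running system the way an attacker would, rather than inferring exposure from source code alone.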
Modern dynamic testing solutions must be AI-powered, capable of keeping pace with the rapid development cycles of AI-generated code. By correlating static findings with dynamic proof, organizations can prioritize fixes based on actual exploitability rather than mere theoretical vulnerabilities. This approach not only enhances security but also instills confidence in developers, allowing them to ship code faster without compromising safety.
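One way to express that correlation is a triage sort: findings with dynamic proof of exploitability rank ahead of purely theoretical ones, then by severity. The record fields and severity scale below are assumptions for illustration, not any particular scanner's schema.

```python
def prioritize(static_findings, confirmed_ids):
    """Order static-analysis findings so that those with dynamic proof
    of exploitability come first, then by declared severity."""
    severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(
        static_findings,
        key=lambda f: (
            f["id"] not in confirmed_ids,          # proven-exploitable first
            severity_rank.get(f["severity"], 4),   # then by severity
        ),
    )


findings = [
    {"id": "SAST-1", "severity": "critical"},  # theoretical only
    {"id": "SAST-2", "severity": "medium"},    # dynamically confirmed
    {"id": "SAST-3", "severity": "high"},      # dynamically confirmed
]
ordered = prioritize(findings, confirmed_ids={"SAST-2", "SAST-3"})
print([f["id"] for f in ordered])  # -> ['SAST-3', 'SAST-2', 'SAST-1']
```

Note that a confirmed medium outranks an unconfirmed critical here, which is exactly the exploitability-over-theory trade-off described above.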
Defensive Measures
Organizations must evolve their security strategies to address the unique challenges posed by AI-generated code and agents. This includes implementing intelligent dynamic application security testing (DAST) that actively probes running applications and APIs. By continuously monitoring for vulnerabilities, organizations can ensure that they are not only identifying issues but also addressing them effectively.
Additionally, establishing a robust API discovery process is crucial. This helps organizations track the endpoints that AI agents are accessing, including undocumented ones. By embedding security measures within the development pipeline, organizations can achieve continuous coverage and reduce the risk of vulnerabilities slipping through the cracks.
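A simple version of that discovery check compares endpoints observed in live traffic against the published spec and surfaces the difference. The inputs here are hypothetical; in practice the observed paths would be parsed from gateway or proxy access logs and the documented paths from an OpenAPI definition.

```python
def find_undocumented(observed_paths, documented_paths):
    """Return API paths seen in live traffic that are absent from the
    documented spec -- candidates for shadow or undocumented endpoints
    that AI agents may be calling."""
    return sorted(set(observed_paths) - set(documented_paths))


# Hypothetical inputs: paths from access logs vs. a published spec.
observed = ["/api/v1/users", "/api/v1/users", "/api/internal/debug", "/api/v1/orders"]
documented = ["/api/v1/users", "/api/v1/orders"]
print(find_undocumented(observed, documented))  # -> ['/api/internal/debug']
```

Running a check like this continuously in the pipeline turns undocumented endpoints from an unknown attack surface into an inventory item that can be tested.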
In conclusion, the conversation around AI security must shift from merely securing AI models to understanding who is testing the code and APIs that AI generates. As AI continues to evolve, so too must our approaches to security, ensuring that we are prepared for the challenges ahead.
Snyk Blog