
AI Security - Application Development Risks Explained

SC Media

Tags: Apiiro, Idan Plotnik, AI coding assistants, application security, vulnerability management
🎯 Basically: AI tools are speeding up coding but creating new security problems.

Quick Summary

AI coding assistants are revolutionizing software development, but they're also introducing new security risks. Idan Plotnik explains how these changes impact security teams and developers alike. Understanding these dynamics is crucial for maintaining application security in a fast-paced environment.

What Happened

In a recent interview at RSAC 2026, Idan Plotnik, co-founder of Apiiro, highlighted the dramatic impact of AI coding assistants on software development. These tools generate code at a volume and pace that traditional security teams struggle to manage. As organizations rapidly adopt AI-driven development, the risk landscape is evolving, leaving existing security measures outdated and ineffective.

The surge in AI-generated code brings with it a host of vulnerabilities. Plotnik emphasized that as developers rely more on these tools, they may overlook potential security flaws. This shift in development practices is prompting a reevaluation of how application security is approached, particularly in light of the increased velocity of code changes.

Who's Being Targeted

The primary stakeholders affected by this shift include CISOs and security teams who are tasked with safeguarding applications. As AI coding assistants become commonplace, these professionals face the challenge of maintaining visibility and control over the security of rapidly evolving codebases. Developers, too, are at risk, as they may inadvertently introduce vulnerabilities while prioritizing speed and efficiency over security.

Organizations across various sectors are adopting these AI tools, making it crucial for security teams to adapt quickly to the new dynamics of software development. The loss of visibility into the code being generated can lead to significant security gaps, affecting the overall integrity of applications.

Risks of AI-Generated Code and Developer Blind Spots

The use of AI in coding presents unique challenges. Plotnik pointed out that traditional vulnerability management models are no longer sufficient. AI-generated code can create blind spots for developers, who may not fully understand the implications of the code being produced. The result is a growing number of vulnerabilities that go unnoticed until they are exploited.

Moreover, the rapid pace of AI coding can create a scenario where vulnerabilities are shipped before they can be adequately addressed. This not only increases the risk of breaches but also complicates compliance with security standards. The need for proactive security measures has never been more pressing.

How to Protect Yourself

To mitigate these risks, organizations must rethink their approach to application security. Plotnik advocates for the adoption of secure coding practices that integrate seamlessly with AI tools. This includes implementing secure prompt technology to guide developers in writing secure code from the outset.
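The article does not detail how "secure prompt technology" works, but the idea can be illustrated with a minimal, hypothetical sketch: wrapping a developer's prompt with explicit secure-coding constraints before it reaches an AI assistant. The rule list and function names below are assumptions for illustration, not any vendor's actual implementation.

```python
# Hypothetical sketch of prompt hardening: prepend secure-coding rules to
# whatever the developer asks an AI assistant to generate. The specific
# rules and the wrapper function are illustrative assumptions.

SECURE_CODING_RULES = [
    "Validate and sanitize all external input.",
    "Use parameterized queries; never build SQL by string concatenation.",
    "Never hard-code secrets, tokens, or credentials.",
    "Prefer well-vetted cryptography libraries over custom implementations.",
]

def harden_prompt(user_prompt: str) -> str:
    """Wrap a developer's prompt with explicit security constraints."""
    rules = "\n".join(f"- {rule}" for rule in SECURE_CODING_RULES)
    return (
        "You are generating production code. Follow these security rules:\n"
        f"{rules}\n\n"
        f"Task: {user_prompt}"
    )

print(harden_prompt("Write a login handler for a Flask app."))
```

The point of the sketch is that guidance is injected before code is written, rather than scanned for after the fact, which matches Plotnik's emphasis on securing code "from the outset."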

Training and awareness programs for developers are essential, but they must evolve to address the unique challenges posed by AI. Security teams should also focus on enhancing their visibility into AI-generated code, ensuring that they can identify and address vulnerabilities before they become a problem. By fostering a culture of security within development teams, organizations can better balance the demands of speed and security in the age of AI-driven development.
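One lightweight way to regain visibility, sketched below under stated assumptions, is to flag commits whose messages carry an AI-assistant co-author trailer (a convention some tools emit) so security teams can route those changes for extra review. The trailer patterns and helper name are hypothetical, not a method described in the article.

```python
# Hypothetical sketch: detect commits that look AI-co-authored so they can
# be routed for additional security review. The trailer patterns matched
# here are assumptions about common tooling conventions.

import re

AI_TRAILER = re.compile(
    r"^Co-authored-by:.*\b(Copilot|assistant|\[bot\])",
    re.IGNORECASE | re.MULTILINE,
)

def needs_security_review(commit_message: str) -> bool:
    """Return True when the commit message suggests AI-generated code."""
    return bool(AI_TRAILER.search(commit_message))

msg = (
    "Add payment endpoint\n\n"
    "Co-authored-by: GitHub Copilot <copilot@github.com>"
)
print(needs_security_review(msg))  # True
```

A check like this could run in CI or a pre-receive hook; it does not assess the code itself, only restores the visibility signal that fast AI-assisted development tends to erase.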

🔒 Pro insight: The rapid adoption of AI coding tools necessitates a fundamental shift in vulnerability management strategies to keep pace with evolving risks.

Original article from SC Media
