AI Security - CISOs Struggle with Legacy Tools and Skills
In short: security leaders are defending new AI systems with old tools, and it isn't working.
A new report finds that security leaders lack the tools and skills to secure AI systems effectively, leaving organizations exposed to significant risk. Closing these gaps in AI security is becoming urgent.
What Happened
A recent study by Pentera highlights a crucial issue in cybersecurity: CISOs are struggling to secure AI systems effectively. The AI and Adversarial Testing Benchmark Report 2026 draws on a survey of 300 US Chief Information Security Officers and senior security leaders. It found that many security teams are relying on outdated skills and tools that are ill-suited to the complexities of AI.
The report reveals that AI adoption is outpacing security visibility. As AI is embedded across corporate technology stacks, 67% of CISOs reported limited visibility into where and how AI is being used. This lack of oversight raises questions about how AI systems operate and the potential risks they pose.
Skills, Not Budget, Are the Primary Barrier
Interestingly, the study indicates that the main challenge is not financial. While many organizations are willing to invest in AI security, 50% of CISOs cited a lack of internal expertise as their biggest hurdle. Other significant challenges include limited visibility into AI usage (48%) and insufficient AI-specific security tools (36%). Only 17% mentioned budget constraints as a primary concern.
This suggests that organizations recognize the need for better security but are struggling to find the right skills to assess AI-related risks. As AI systems introduce new behaviors and access patterns, security teams face the daunting task of adapting to these changes without adequate training.
Legacy Controls Are Carrying Most of the Load
In the absence of AI-specific best practices, many enterprises are extending existing security controls to cover AI infrastructure. The report found that 75% of CISOs rely on legacy security controls, such as endpoint and application security tools, to protect AI systems. Just 11% reported having tools specifically designed for AI security.
This reliance on traditional controls reflects a familiar pattern from past technology shifts. While the approach provides basic coverage, legacy controls may not address the unique risks AI introduces, such as altered access patterns and an expanded attack surface.
A Familiar Challenge, Now Applied to AI
The findings underscore that the challenges surrounding AI security stem from foundational gaps rather than a lack of awareness. As AI becomes a core part of enterprise infrastructure, organizations must focus on building expertise and on validating security controls across the environments where AI operates. The report emphasizes the need for specialized skills and tools tailored to the AI landscape to strengthen security posture.
For a more in-depth understanding of the findings, download the AI and Adversarial Testing Benchmark Report 2026. This report provides critical insights into the current state of AI security and the necessary steps organizations must take to address these challenges.