AI & Security · HIGH

AI Security - CISOs Struggle with Legacy Tools and Skills

🎯 Basically, security leaders are using old tools to protect new AI systems, which doesn't work well.

Quick Summary

A new report reveals that security leaders are struggling to secure AI systems effectively. With outdated tools and skills, organizations face significant risks. It's time to address these gaps in AI security.

What Happened

A recent study by Pentera highlights a crucial issue in cybersecurity: CISOs are struggling to secure AI systems effectively. The AI and Adversarial Testing Benchmark Report 2026 surveyed 300 US Chief Information Security Officers and senior security leaders. It found that many security teams are relying on outdated skills and tools that are ill-suited for the complexities of AI.

The report reveals that AI adoption is outpacing security visibility. As AI systems integrate into various corporate technologies, 67% of CISOs reported limited visibility into their usage. This lack of oversight raises questions about how AI systems operate and the potential risks they pose.

Skills, Not Budget, Are the Primary Barrier

Interestingly, the study indicates that the main challenge is not financial. While many organizations are willing to invest in AI security, 50% of CISOs cited a lack of internal expertise as their biggest hurdle. Other significant challenges include limited visibility into AI usage (48%) and insufficient AI-specific security tools (36%). Only 17% mentioned budget constraints as a primary concern.

This suggests that organizations recognize the need for better security but are struggling to find the right skills to assess AI-related risks. As AI systems introduce new behaviors and access patterns, security teams face the daunting task of adapting to these changes without adequate training.

Legacy Controls Are Carrying Most of the Load

In the absence of AI-specific best practices, many enterprises are extending existing security controls to cover AI infrastructure. The report found that 75% of CISOs rely on legacy security controls, such as endpoint and application security tools, to protect AI systems; only 11% reported having tools specifically designed for AI security.

This reliance on traditional controls reflects a familiar pattern from past technology shifts. While legacy systems offer some basic coverage, they may not effectively address the risks unique to AI, such as altered access patterns and expanded attack surfaces.

A Familiar Challenge, Now Applied to AI

The findings of the report underscore that the challenges surrounding AI security stem from foundational gaps rather than a lack of awareness. As AI becomes a core part of enterprise infrastructure, organizations must focus on building expertise and improving their validation of security controls across environments where AI operates. The report emphasizes the need for specialized skills and tools tailored to the AI landscape to enhance security posture effectively.

For a more in-depth understanding of the findings, download the AI and Adversarial Testing Benchmark Report 2026. This report provides critical insights into the current state of AI security and the necessary steps organizations must take to address these challenges.

🔒 Pro insight: The reliance on legacy controls indicates a critical skills gap that could expose organizations to emerging AI threats.

Original article from The Hacker News

Related Pings

MEDIUM · AI & Security

AI Security - Veritone Automates PII Removal Process

Veritone has launched a new tool to automate the removal of personal information from data used for AI. This affects organizations needing compliant datasets. Protecting sensitive data is crucial for ethical AI deployment.

Help Net Security

HIGH · AI & Security

AI Security - Jozu Agent Guard Launches for AI Agent Control

Jozu has launched Agent Guard, a new tool to secure AI agents from bypassing controls. This affects organizations using AI technologies without proper security measures. The tool aims to close governance gaps and protect corporate assets effectively.

Help Net Security

HIGH · AI & Security

AI Security - Proofpoint Introduces Intent-Based Detection

Proofpoint has launched AI Security to combat AI-related threats. This solution helps organizations secure AI interactions, addressing urgent security challenges. With increasing AI use, protecting data is critical.

Help Net Security

MEDIUM · AI & Security

AI Security - Enhancing Code Guidance with LLMs Explained

Mark Curphey explores how LLMs can enhance secure coding practices. He stresses the importance of clear documentation and authoritative sources for effective AI training. This conversation sheds light on the future of coding in an AI-driven world.

SC Media

HIGH · AI & Security

Google Cracks Down on Android Apps Abusing Accessibility

Google has tightened restrictions on Android apps using accessibility features. This change aims to curb malware exploitation and enhance user security significantly. Users should enable Advanced Protection Mode for better protection.

Malwarebytes Labs

HIGH · AI & Security

AI Security - Prompt Fuzzing Reveals LLMs' Fragility

Unit 42's latest research reveals that LLMs are vulnerable to prompt fuzzing attacks. This affects organizations using generative AI, risking safety and compliance. It's crucial to strengthen defenses against these evolving threats.

Palo Alto Unit 42