AI & Security · HIGH

AI Security - Cybersecurity Staff Unprepared for Attacks

Infosecurity Magazine
Tags: ISACA, AI security, cyber-attack, cybersecurity professionals, enterprise AI

Basically, many cybersecurity workers don't know how quickly they could shut down their AI systems during an attack.

Quick Summary

A new ISACA survey shows that most cybersecurity staff are unsure how quickly they could shut down AI systems during a cyber-attack. This knowledge gap poses serious risks for organizations that rely on AI, and it makes clear governance and training essential to improving response capabilities.

What Happened

A recent ISACA survey has revealed a concerning gap in cybersecurity professionals' preparedness around AI systems. Over 56% of IT and cybersecurity staff reported they are unsure how quickly they could shut down AI systems during a cyber-attack, an uncertainty that could have severe consequences in a real incident. Only 32% believed they could shut the systems down within an hour, while 7% expected it to take longer. This lack of confidence highlights a critical vulnerability in organizations that increasingly rely on AI technology.

Who's Affected

The survey included responses from over 3,400 security and digital professionals, revealing a widespread issue across various organizations. The confusion extends beyond just response times; about 20% of respondents did not know who is responsible for managing AI applications within their enterprises. This ambiguity can hinder effective incident response and governance, leaving organizations vulnerable to attacks.

What Data Was Exposed

The survey results indicate that many organizations lack proper oversight of their AI systems. Only 36% of respondents said human approval is required for most AI actions, and 20% admitted they did not know what role humans play in overseeing AI decisions. This lack of clarity creates significant risk, as organizations may struggle to identify and mitigate AI-related security issues.
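
One way to make that human oversight concrete is a human-in-the-loop gate that blocks higher-risk AI actions until someone signs off. The sketch below is illustrative only; the action names and the request_human_approval workflow are assumptions made for the example, not anything described in the ISACA survey.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate for higher-risk AI actions.
# Action names and the approval workflow are assumptions for illustration;
# the ISACA survey does not prescribe any particular implementation.

HIGH_RISK_ACTIONS = {"delete_records", "send_external_email", "change_firewall_rule"}

@dataclass
class AIAction:
    name: str
    requested_by: str  # which AI agent or pipeline asked for the action
    payload: dict

def request_human_approval(action: AIAction) -> bool:
    """Placeholder: route the request to an on-call reviewer (ticket, chat, etc.)."""
    print(f"[approval needed] {action.name} requested by {action.requested_by}")
    return False  # deny by default until a human explicitly approves

def execute_ai_action(action: AIAction) -> None:
    # High-risk actions are held until a human approves; everything else runs.
    if action.name in HIGH_RISK_ACTIONS and not request_human_approval(action):
        print(f"[blocked] {action.name}: awaiting human sign-off")
        return
    print(f"[executed] {action.name}")

execute_ai_action(AIAction("send_external_email", "support-bot", {"to": "customer@example.com"}))
```

The point of the pattern is simply that the answer to "who approves this AI action?" is written down and enforced, rather than left to the 20% who do not know.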

What You Should Do

Organizations must prioritize establishing clear governance and oversight for their AI systems. It is crucial to define roles and responsibilities regarding AI management to enhance incident response capabilities. Security professionals should advocate for stronger policies and processes to ensure effective AI usage while minimizing risks. As Jenai Marinkovic, a vCISO and CTO, emphasizes, having the right guardrails in place is essential for leveraging AI technology responsibly. Organizations should also conduct regular training and simulations to prepare their teams for potential AI-related incidents, ensuring they can act swiftly and effectively when necessary.
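
As a starting point, the governance gap can be made measurable: keep an inventory of AI systems and flag any entry that lacks a named owner or a documented shutdown procedure. The following minimal sketch assumes a hand-maintained inventory; the field names are hypothetical and not drawn from the survey or any specific framework.

```python
# Minimal sketch: flag AI systems that lack a named owner or a documented
# shutdown procedure. The inventory layout and field names are assumptions
# for illustration, not taken from the ISACA survey or any standard.

ai_inventory = [
    {"system": "support-chatbot", "owner": "it-ops", "shutdown_runbook": "runbooks/chatbot.md"},
    {"system": "fraud-scoring-model", "owner": None, "shutdown_runbook": None},
]

def governance_gaps(inventory):
    """Return (system, missing fields) for entries without an owner or runbook."""
    gaps = []
    for entry in inventory:
        missing = [field for field in ("owner", "shutdown_runbook") if not entry.get(field)]
        if missing:
            gaps.append((entry["system"], missing))
    return gaps

for system, missing in governance_gaps(ai_inventory):
    print(f"{system}: missing {', '.join(missing)}")
```

A check like this gives incident responders a concrete answer to the survey's core question: who owns each AI system, and where is the procedure to shut it down.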

🔒 Pro insight: The survey highlights a critical gap in AI governance; organizations must implement clear protocols to enhance incident response times.

Original article from Infosecurity Magazine

Related Pings

HIGH · AI & Security

AI Security - Introducing Agent Security for Governance

Snyk has launched Agent Security to help organizations govern AI agents effectively. This new tool aims to tackle the challenges of Shadow AI, ensuring safe behavior from development to deployment. With the rise of AI in software, understanding and managing these risks is crucial for all businesses.

Snyk Blog

MEDIUM · AI & Security

AI Security - GitHub Expands Application Coverage with AI

GitHub is enhancing application security with AI-powered detections. This upgrade will help developers identify vulnerabilities across various languages, improving security workflows. Early testing shows promising results, making it easier to catch and fix risks early in the development process.

GitHub Security Blog

MEDIUM · AI & Security

AI Security - Creating with Sora Safely Explained

Sora 2 and the Sora app prioritize user safety in social creation. With advanced protections, they address new AI security challenges. This innovation aims to create a secure environment for all users.

OpenAI News

HIGH · AI & Security

AI Security - Google Launches Gemini Agents to Monitor the Dark Web

Google has launched Gemini AI agents to monitor the dark web, analyzing millions of posts daily. This tool helps organizations detect relevant threats with high accuracy. As companies adopt this technology, they must remain vigilant about potential misuse and privacy concerns.

The Register Security

HIGH · AI & Security

AI in Financial Crime Compliance - Transforming the Landscape

AI is revolutionizing financial crime compliance by enhancing KYC and AML processes. As illicit transactions rise, institutions must adapt to avoid penalties. The future of compliance is here, driven by AI.

SC Media

HIGH · AI & Security

AI Security - Varonis Atlas Enhances Data Protection

Varonis Atlas has launched to secure AI systems and the sensitive data they access. This is crucial as organizations increasingly rely on AI, which can pose significant risks. With comprehensive visibility and control, Varonis Atlas helps organizations manage these risks effectively.

BleepingComputer