AI Security - Cybersecurity Staff Unprepared for Attacks
In short, many cybersecurity professionals do not know how quickly they could shut down AI systems during an attack.
A new ISACA survey shows that most cybersecurity staff are unsure how quickly they can respond to AI cyber-attacks. This knowledge gap poses serious risks for organizations relying on AI. It's crucial for companies to establish clear governance and training to improve their response capabilities.
What Happened
A recent survey conducted by ISACA has revealed a concerning gap in the preparedness of cybersecurity professionals regarding AI systems. Over 56% of IT and cybersecurity staff reported being unsure how quickly they could shut down AI systems during a cyber-attack. Of the remainder, 32% believed they could respond within an hour, while 7% expected it to take longer. This uncertainty could lead to severe consequences in an actual incident and highlights a critical vulnerability in organizations that increasingly rely on AI technology.
Who's Affected
The survey included responses from over 3,400 security and digital professionals, revealing a widespread issue across various organizations. The confusion extends beyond just response times; about 20% of respondents did not know who is responsible for managing AI applications within their enterprises. This ambiguity can hinder effective incident response and governance, leaving organizations vulnerable to attacks.
What the Survey Revealed
The survey results indicate that many organizations lack proper oversight of their AI systems. Only 36% of respondents said human approval is required for most AI actions, and 20% admitted they did not know what role humans play in overseeing AI decisions. Without that clarity, organizations may struggle to identify and mitigate AI-related security issues effectively.
What You Should Do
Organizations must prioritize establishing clear governance and oversight for their AI systems. Defining roles and responsibilities for AI management is crucial to improving incident response capabilities. Security professionals should advocate for stronger policies and processes that enable effective AI usage while minimizing risk. As Jenai Marinkovic, a vCISO and CTO, emphasizes, having the right guardrails in place is essential for leveraging AI technology responsibly. Organizations should also run regular training and simulations to prepare their teams for AI-related incidents, ensuring they can act swiftly and effectively when necessary.
Infosecurity Magazine