AI & SecurityHIGH

AI Security - Managing Unmanaged Cyber Risks Explained

TETenable Blog
AI governanceTenable ResearchChatGPTGoogle Geminiexposure management
🎯

Basically, AI tools can create new security risks if not properly managed.

Quick Summary

AI's rapid deployment is creating new cyber risks. Organizations must address vulnerabilities in AI tools to protect sensitive data. Unified exposure management is key to securing their environments.

What Happened

AI technology is advancing at breakneck speed, but this rapid evolution is opening up new avenues for cyber attacks. Cybersecurity teams are now faced with the challenge of managing vulnerabilities associated with AI tools that lack proper governance. The risks are significant, as AI models have transformed from isolated targets into potential attack vectors that can compromise sensitive data and workflows.

Organizations have prioritized speed in deploying AI, often overlooking the necessary security controls. This gap has led to high-risk environments characterized by over-privileged access and vulnerable software supply chains. As AI becomes integral to core business operations, the need for effective governance and exposure management has never been more critical.

Who's Being Targeted

Recent findings from Tenable Research highlight the vulnerabilities present in popular AI models like OpenAI's ChatGPT and Google's Gemini. These vulnerabilities include indirect prompt injection and privacy risks that can be exploited by attackers to extract sensitive information. The research shows that AI systems are not just targets; they can actively facilitate attacks by connecting to sensitive data and cloud services.

Moreover, as organizations increasingly rely on AI for their daily operations, the risk landscape expands. Approximately 70% of organizations now incorporate AI into their production cloud stack, with employee access to AI tools rising significantly. This rapid adoption, however, has not been matched by adequate security measures, leaving organizations exposed to potential breaches.

Tactics & Techniques

The vulnerabilities in AI tools create multiple attack paths that can be exploited by threat actors. For instance, Tenable's research uncovered critical flaws in the Google Looker Studio, which could allow attackers to expose or manipulate sensitive cloud data. This illustrates how interconnected AI systems can widen the attack surface, turning AI into a potential weapon against organizations.

Additionally, many organizations face challenges with governance, as AI tools are often operationalized faster than security teams can assess the associated risks. This creates a situation where overprivileged identities and inactive accounts can be exploited, further complicating the security landscape.

Defensive Measures

To mitigate these risks, organizations must adopt a unified exposure management strategy. This approach enables security teams to gain visibility into the complex web of interactions, identities, and permissions surrounding AI tools. By moving away from siloed security dashboards, organizations can better understand how AI systems interact with their IT environments and identify potential vulnerabilities.

Effective exposure management allows organizations to proactively address the highest-risk cyber exposures, ensuring that they can secure their sensitive data and systems against evolving threats. As AI continues to integrate into business operations, prioritizing comprehensive governance and risk management will be essential to safeguarding against cyber risks.

🔒 Pro insight: The rapid integration of AI into workflows necessitates immediate governance to prevent exploitations of newly identified vulnerabilities.

Original article from

Tenable Blog · Ari Eitan

Read Full Article

Related Pings

HIGHAI & Security

AI Security - Varonis Atlas Enhances Data Protection

Varonis Atlas has launched to secure AI systems and the sensitive data they access. This is crucial as organizations increasingly rely on AI, which can pose significant risks. With comprehensive visibility and control, Varonis Atlas helps organizations manage these risks effectively.

BleepingComputer·
MEDIUMAI & Security

AI Security - Insights from NIST Cyber AI Profile Workshop

NIST's recent workshop on the Cyber AI Profile gathered valuable insights on AI governance and cybersecurity. Participants emphasized the need for clear guidelines and effective risk management strategies. This feedback will shape future drafts and enhance AI security practices.

NIST Cybersecurity Blog·
HIGHAI & Security

AI Security - Apiiro Introduces Threat Modeling Solution

Apiiro has launched AI Threat Modeling to identify risks before code exists. This innovative tool helps organizations manage security in AI-driven applications effectively.

Help Net Security·
HIGHAI & Security

AI Security - Straiker Enhances Protection for AI Agents

Straiker has launched new AI security tools to protect coding and productivity agents. Organizations using these agents face serious risks without proper oversight. Discover AI and Defend AI help security teams monitor and secure their AI environments effectively.

Help Net Security·
HIGHAI & Security

AI Security - Astrix Expands Agent Governance Platform

Astrix Security has expanded its AI agent security platform to cover all enterprise AI agents. This enhancement is crucial for managing both sanctioned and shadow agents effectively. With the rapid deployment of AI, enterprises face significant risks without proper governance. Astrix aims to fill this gap with real-time monitoring and policy enforcement.

Help Net Security·
HIGHAI & Security

AI Security - Rubrik SAGE Enhances Governance for Agents

Rubrik has launched SAGE, a new AI governance engine. It enables real-time control of AI agents, addressing governance bottlenecks. This innovation is crucial for secure enterprise AI deployment.

Help Net Security·