AI & Security · HIGH

AI Security - Navigating the Runtime Challenges Ahead

🎯 AI agents can make mistakes that affect company systems, so they need close, continuous monitoring.

Quick Summary

AI agents are becoming common in enterprises, but their mistakes can be costly. From deleted inboxes to service outages, the risks are real. Security leaders must adapt to monitor these agents effectively.

What Happened

AI agents are now integral to enterprise networks, performing tasks like writing code and managing emails. However, they can also make significant errors. For instance, an AI assistant at Meta mistakenly deleted an employee's inbox, while an Amazon agent caused a service outage by rebuilding a deployment environment. These incidents highlight a critical shift in security as autonomous software operates with real permissions and consequences.

Security experts are now emphasizing the need for runtime security: continuously monitoring AI agents' behavior after deployment. Joe Sullivan, a former CISO, compares AI agents to teenagers: they have extensive access but lack judgment. This focus marks a shift, because traditional approaches to securing AI have concentrated on prevention before deployment rather than on behavior after it.

Why Agents Change the Security Model

CISOs have historically focused on managing human behavior within enterprise networks using identity management, access controls, and user behavior analytics. However, the rise of AI agents complicates this model. These agents often bypass traditional security checkpoints, operating through API calls and generating significantly more activity than a human employee.

For example, while a typical employee might produce 50 to 100 log events in two hours, an AI agent can generate 10 to 20 times that amount. Moreover, not all AI platforms provide logs, making it difficult for security teams to track their activities. This lack of visibility poses a significant challenge when trying to monitor agent behavior effectively.
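The volume gap described above is itself a usable signal. A minimal sketch, assuming SIEM-style (identity, timestamp) records and using the article's 50-100 events per two hours as the human baseline; the identity names and the 10x flag threshold are illustrative assumptions, not a product feature:

```python
from collections import Counter

# Human baseline from the article: roughly 50-100 log events per 2-hour window.
HUMAN_BASELINE_PER_2H = 100

def flag_agent_like_identities(events, threshold_multiple=10):
    """Return identities whose event count in one 2-hour window exceeds
    `threshold_multiple` times the human baseline (assumed heuristic)."""
    counts = Counter(identity for identity, _ in events)
    return {
        identity: n
        for identity, n in counts.items()
        if n > threshold_multiple * HUMAN_BASELINE_PER_2H
    }

# Usage: a human-scale identity next to an agent-scale one.
events = [("alice", t) for t in range(80)] + [("mail-agent", t) for t in range(1500)]
print(flag_agent_like_identities(events))  # {'mail-agent': 1500}
```

A threshold like this only catches chatty agents; agents on platforms that emit no logs at all remain invisible to it, which is the visibility gap the paragraph above describes.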

What Runtime Monitoring Looks Like

Once organizations identify their AI agents, they must determine what behaviors to monitor. Existing endpoint detection and response (EDR) tools can be instrumental in tracking AI agents. These tools capture detailed information about application behavior, including network connections and file interactions.

CrowdStrike’s EDR technology, for instance, creates a threat graph that maps behaviors back to their origins. This capability allows security teams to apply different policies to known agent applications compared to those operated by humans. By doing so, organizations can better manage the risks associated with AI agents and their actions.
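The policy split described above can be sketched simply once behaviors are attributed to an origin, as a threat graph would do. The event fields, policy contents, and origin names below are illustrative assumptions, not CrowdStrike's API:

```python
# Policies differ by actor type: agents get tighter defaults (assumed values).
AGENT_POLICY = {"allow_outbound": False, "require_review": True}
HUMAN_POLICY = {"allow_outbound": True, "require_review": False}

# Known agent identities, e.g. sourced from an agent inventory (hypothetical names).
KNOWN_AGENT_ORIGINS = {"mail-agent", "deploy-agent"}

def policy_for(event):
    """Route a behavior event to a policy based on its attributed origin."""
    return AGENT_POLICY if event["origin"] in KNOWN_AGENT_ORIGINS else HUMAN_POLICY

event = {"origin": "deploy-agent", "action": "rebuild_environment"}
print(policy_for(event))  # {'allow_outbound': False, 'require_review': True}
```

The design point is that the policy decision hinges on attribution: without a reliable mapping from behavior back to origin, every event defaults to human-grade trust.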

What CISOs Should Do Now

The shift towards runtime security requires CISOs to adopt a new mindset regarding AI risk. It is not just about how agents are built, but about how they behave in real time within enterprise systems. That means extending existing security practices, systematically, to cover this new category of actor.

CISOs should start by creating a structured inventory of AI agents in use, utilizing specialized tools for agent discovery. This foundational step will enable effective monitoring and risk management. Additionally, integrating runtime monitoring with existing security measures will create a comprehensive defense strategy, ensuring that organizations can respond swiftly to any unexpected actions by AI agents.
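As a starting point, the inventory can be as simple as one structured record per agent. A minimal sketch; the field set here is an assumption about what a CISO would want to track, not a standard schema, and the agent names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One inventory entry per AI agent (assumed fields, not a standard)."""
    name: str
    owner: str                          # accountable human team
    platform: str                       # where the agent runs
    permissions: list = field(default_factory=list)
    emits_logs: bool = False            # can runtime monitoring see it at all?

inventory = [
    AgentRecord("mail-agent", "it-ops", "m365", ["mailbox.read", "mailbox.delete"]),
    AgentRecord("deploy-agent", "platform", "aws", ["deploy.write"], emits_logs=True),
]

# First monitoring gap to close: agents that emit no logs.
blind_spots = [a.name for a in inventory if not a.emits_logs]
print(blind_spots)  # ['mail-agent']
```

Even this small structure makes the next step concrete: runtime monitoring can only cover agents the inventory knows about and that produce telemetry.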

🔒 Pro insight: As AI agents proliferate, traditional security frameworks must evolve to incorporate real-time monitoring and behavioral analysis for effective risk management.

Original article from CSO Online


Related Pings

HIGH · AI & Security

AI Security - Proofpoint Launches New Intent-Based Solution

Proofpoint has launched a new AI security solution to protect enterprise AI agents. This framework addresses the growing risks associated with autonomous AI operations. Organizations can now implement better governance and security measures to safeguard their data and operations.

Proofpoint Threat Insight

HIGH · AI & Security

AI Security - Hidden Instructions in README Files Exposed

New research reveals a significant security risk in AI coding agents. Hidden instructions in README files can lead to data leaks, affecting developers' sensitive information. It's crucial to understand and mitigate these vulnerabilities to protect your projects.

Help Net Security

MEDIUM · AI & Security

AI Security - Gartner Proposes Friday Copilot Ban Alert

Gartner analyst Dennis Xu recently proposed an unconventional idea: banning the use of Microsoft’s Copilot AI on Friday afternoons. The suggestion stems from concerns that users may be too fatigued at the end of the week to adequately verify the AI's output. Xu raised the point during his talk at the Security & Risk Management Summit.

The Register Security

HIGH · AI & Security

AI Security - Securing Autonomous Agents with TrendAI & NVIDIA

TrendAI and NVIDIA OpenShell are securing autonomous AI agents. This partnership aims to enhance governance and risk visibility for enterprise AI systems. As AI evolves, so does the need for robust security measures.

Trend Micro Research

HIGH · AI & Security

AI Security - Bank Develops Own Threat Hunting Agent

Commonwealth Bank has developed its own AI threat hunting tool to tackle rising cyber threats. Traditional vendors couldn't keep up, prompting this innovation. The new system drastically improves response times, enhancing overall security.

The Register Security

MEDIUM · AI & Security

AI Security Startups - Bold and Onyx Launch with $40M Each

Bold Security and Onyx Security have launched with $40 million each to tackle AI-related security risks. Their innovative solutions aim to enhance enterprise protection. This funding reflects the growing importance of AI security in today's digital landscape.

SC Media