AI Security - Navigating the Runtime Challenges Ahead
AI agents can make costly mistakes inside company systems, and catching those mistakes means watching the agents at runtime.
AI agents are becoming common in enterprises, but their errors carry real costs, from deleted inboxes to service outages. Security leaders must adapt their monitoring practices to cover these new actors.
What Happened
AI agents are now embedded in enterprise networks, writing code, managing email, and handling other routine work. They can also make significant errors. An AI assistant at Meta mistakenly deleted an employee's inbox, and an Amazon agent caused a service outage by rebuilding a deployment environment. These incidents mark a shift in the security problem: autonomous software now operates with real permissions and real consequences.
Security experts are responding by emphasizing runtime security: continuously monitoring what AI agents actually do after deployment. Joe Sullivan, a former CISO, compares AI agents to teenagers: they have extensive access but lack judgment. The runtime focus is a departure from traditional approaches to securing AI, which have concentrated on prevention before deployment.
Why Agents Change the Security Model
CISOs have historically focused on managing human behavior within enterprise networks using identity management, access controls, and user behavior analytics. However, the rise of AI agents complicates this model. These agents often bypass traditional security checkpoints, operating through API calls and generating significantly more activity than a human employee.
For example, while a typical employee might produce 50 to 100 log events in two hours, an AI agent can generate 10 to 20 times that amount. Worse, not all AI platforms emit logs at all, leaving security teams with no record of those agents' activity. That visibility gap is the first obstacle to monitoring agent behavior.
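The volume gap can itself serve as a discovery signal where logs do exist. As a minimal sketch (not from the article), assuming security logs can be reduced to (identity, timestamp) pairs, a team could flag identities whose event rate far exceeds a human baseline as candidate agents for review:

```python
from datetime import timedelta

# Baseline taken from the figures above: a human employee produces
# roughly 50-100 log events in a two-hour window; agents reportedly
# generate 10-20x that. The numbers are the article's; the code is not.
HUMAN_EVENTS_PER_WINDOW = 100
AGENT_MULTIPLIER = 10

def flag_candidate_agents(events, window_hours=2):
    """events: iterable of (identity, datetime) log records.
    Returns identities whose event count in any sliding window
    exceeds AGENT_MULTIPLIER times the human baseline."""
    window = timedelta(hours=window_hours)
    threshold = HUMAN_EVENTS_PER_WINDOW * AGENT_MULTIPLIER

    per_identity = {}
    for identity, ts in events:
        per_identity.setdefault(identity, []).append(ts)

    flagged = set()
    for identity, stamps in per_identity.items():
        stamps.sort()
        lo = 0
        for hi, ts in enumerate(stamps):
            while ts - stamps[lo] > window:
                lo += 1  # shrink the window from the left
            if hi - lo + 1 > threshold:
                flagged.add(identity)
                break
    return flagged
```

A rate threshold like this only surfaces candidates for human review, and, as noted above, it does nothing for platforms that emit no logs at all.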
What Runtime Monitoring Looks Like
Once organizations identify their AI agents, they must determine what behaviors to monitor. Existing endpoint detection and response (EDR) tools can be instrumental in tracking AI agents. These tools capture detailed information about application behavior, including network connections and file interactions.
CrowdStrike’s EDR technology, for instance, creates a threat graph that maps behaviors back to their origins. This capability allows security teams to apply different policies to known agent applications compared to those operated by humans. By doing so, organizations can better manage the risks associated with AI agents and their actions.
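CrowdStrike's threat graph is proprietary, but the policy idea it enables can be sketched generically. Assuming an EDR feed that exposes each process with its parent chain (the record shape, binary names, and policy fields below are illustrative assumptions, not CrowdStrike's API), a team could walk the lineage and tighten controls whenever activity traces back to a known agent binary:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical agent binary names; a real deployment would populate
# this from the organization's agent inventory.
KNOWN_AGENT_BINARIES = {"copilot-agent.exe", "devin-runner", "autogpt"}

@dataclass
class Process:
    name: str
    parent: Optional["Process"]

def traces_to_agent(proc: Process) -> bool:
    """Walk the parent chain; True if the process or any ancestor
    is a known AI agent binary."""
    node = proc
    while node is not None:
        if node.name in KNOWN_AGENT_BINARIES:
            return True
        node = node.parent
    return False

def policy_for(proc: Process) -> dict:
    """Apply a tighter policy to agent-originated activity than to
    the same action performed interactively by a human."""
    if traces_to_agent(proc):
        return {"outbound_network": "allowlist_only",
                "file_delete": "block_and_alert",
                "bulk_operations": "require_approval"}
    return {"outbound_network": "monitor",
            "file_delete": "log",
            "bulk_operations": "log"}
```

The design point is that the same action, say a bulk file deletion, gets different treatment depending on whether its lineage is human or agent.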
What CISOs Should Do Now
The shift toward runtime security requires CISOs to adopt a new mindset about AI risk: what matters is not only how agents are built, but how they behave in real time inside enterprise systems. Security practices must be extended systematically to cover this new category of actor.
CISOs should start by building a structured inventory of the AI agents in use, drawing on agent-discovery tools where available. That inventory is the foundation for monitoring and risk management. From there, integrating runtime monitoring with existing security controls creates a layered defense, so organizations can respond quickly when an agent does something unexpected.
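What "structured" means will vary by organization. As one hedged example (the schema and field names below are assumptions, not a standard), each entry might capture at minimum the agent's identity, owner, permissions, and where its logs land, since agents with write access and no log source are the most urgent monitoring gap:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    # Illustrative inventory schema; fields are assumptions, not a standard.
    agent_id: str            # stable identifier (e.g., a service account name)
    owner: str               # accountable human team or individual
    platform: str            # vendor or framework the agent runs on
    permissions: list[str]   # scopes/roles the agent can exercise
    log_source: str | None   # where its activity is logged, if anywhere
    can_write: bool          # whether it can modify systems, not just read

inventory = [
    AgentRecord(                      # hypothetical example entry
        agent_id="svc-email-triage",
        owner="it-productivity",
        platform="vendor-copilot",
        permissions=["mail.read", "mail.move"],
        log_source="siem:exchange-audit",
        can_write=True,
    ),
]

# Agents that can change systems but leave no logs are the blind spots
# to close first when wiring the inventory into runtime monitoring.
blind_spots = [a for a in inventory if a.can_write and a.log_source is None]
```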
CSO Online