SentinelOne AI EDR Stops Anthropic's Zero-Day Attack

In short: SentinelOne's AI EDR thwarted a global LiteLLM supply chain attack before the malicious code could execute. The incident highlights the risk of AI tools running with unrestricted permissions and underscores the need for organizations to reassess their AI governance before a similar threat succeeds.
What Happened
On March 24, 2026, SentinelOne's autonomous detection system identified and halted a zero-day supply chain attack targeting LiteLLM, a widely used proxy layer for large language model API calls. A group known as TeamPCP compromised a trusted open-source security tool and used it to deliver malicious updates: a trojaned version of LiteLLM that attempted to execute malicious Python code across multiple environments. SentinelOne's AI EDR detected and blocked the attack before any damage occurred, demonstrating the value of AI-driven detection for real-time threat mitigation.
Who's Affected
The attack could have affected the many organizations that use LiteLLM in their AI applications. Because LiteLLM is popular among developers, many teams could have unknowingly installed the compromised version, exposing them to data theft and operational disruption. The incident underscores the risks of supply chain vulnerabilities, especially when trusted tools are exploited to distribute malicious payloads. Organizations that run AI coding assistants with unrestricted permissions are particularly exposed, since these tools can autonomously update and execute compromised packages without human oversight.
What Data Was Exposed
The compromised LiteLLM versions were designed to execute malicious code that could harvest sensitive information, including user credentials, cryptocurrency wallets, and cloud access tokens. The attack was structured in multiple stages, with the initial payload designed to establish persistence and lateral movement within affected systems. Once inside, the malware could create privileged pods in Kubernetes clusters, gaining root access and enabling further exploitation. This multi-stage approach allowed attackers to exfiltrate data while evading traditional security measures.
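The privileged-pod stage of an attack like this can be caught by inspecting pod specs before (or after) admission. Below is a minimal sketch of such a check; the `is_privileged_pod` helper and the sample specs are illustrative assumptions, not artifacts from the actual incident:

```python
def is_privileged_pod(pod_spec: dict) -> bool:
    """Return True if a pod spec requests privileged containers or
    shares the host's PID or network namespace -- the kind of pod the
    malware reportedly created to gain root access on cluster nodes."""
    if pod_spec.get("hostPID") or pod_spec.get("hostNetwork"):
        return True
    containers = pod_spec.get("containers", []) + pod_spec.get("initContainers", [])
    for container in containers:
        security_context = container.get("securityContext") or {}
        if security_context.get("privileged"):
            return True
    return False

# Hypothetical example of a spec a compromised package might submit.
suspicious_spec = {
    "containers": [
        {"name": "worker", "securityContext": {"privileged": True}},
    ],
}
benign_spec = {"containers": [{"name": "app", "securityContext": {}}]}

print(is_privileged_pod(suspicious_spec))  # True
print(is_privileged_pod(benign_spec))      # False
```

In production this logic belongs in an admission controller or a policy engine rather than an ad hoc script, so that privileged pods are rejected before they are scheduled.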
What You Should Do
Organizations should reassess their security policies for AI tools and coding assistants:

- Implement strict governance around AI agent permissions, limiting what an assistant can install or execute autonomously.
- Regularly audit and monitor all software dependencies, especially those sourced from open-source repositories.
- Deploy behavioral detection tools that identify malicious activity at the process level, regardless of how a malicious package is introduced.

Closing the gap between attack speed and detection capability is the most reliable defense against the evolving landscape of supply chain attacks.
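The dependency-audit recommendation can be automated by comparing installed versions against an approved pin list. The sketch below shows the idea; the `audit_requirements` helper and the version numbers are placeholders, not the actual compromised or safe LiteLLM releases:

```python
def audit_requirements(installed: dict, approved: dict) -> list:
    """Compare installed package versions against an approved pin list.

    Returns human-readable findings for any package that is missing
    from the allowlist or drifted from its approved version.
    """
    findings = []
    for package, version in installed.items():
        if package not in approved:
            findings.append(f"{package}=={version}: not in approved list")
        elif version != approved[package]:
            findings.append(f"{package}=={version}: expected {approved[package]}")
    return findings

# Hypothetical pins -- replace with versions from your own audited lockfile.
approved = {"litellm": "1.0.0", "requests": "2.31.0"}
installed = {"litellm": "1.0.1", "requests": "2.31.0", "leftpad": "0.1"}

for finding in audit_requirements(installed, approved):
    print(finding)
```

For real deployments, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) enforces the same idea at install time, refusing any artifact whose hash does not match the lockfile.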