🎯Imagine if your online assistant could be tricked into running dangerous commands just by changing the title of a document. That's what's happening with Claude Code and other AI tools, making it really important for users to stay alert and secure their projects.
What Happened
A recent discovery has sent shockwaves through the tech community. Multiple security vulnerabilities have been found in Anthropic's Claude Code, an AI-driven coding assistant. These flaws could potentially allow hackers to execute code remotely and steal sensitive API credentials. Additionally, a high-severity security bypass vulnerability has been identified, allowing malicious actors to silently evade user-configured deny rules through a command-padding technique. Compounding the issue, Anthropic accidentally leaked the entire source code of Claude Code, exposing over 512,000 lines of code, which could further amplify the risks associated with these vulnerabilities.
The vulnerabilities arise from various configuration mechanisms, such as Hooks, Model Context Protocol (MCP) servers, and environment variables. These weaknesses create a gateway for attackers, enabling them to manipulate the system and gain unauthorized access to sensitive information. Exploiting these flaws could lead to severe consequences for users and companies relying on Claude Code.
New Threat Campaigns
In addition to the vulnerabilities, two significant threat campaigns have been identified that specifically target Claude Code users. The EvilTokens campaign utilizes Microsoft’s OAuth authentication flow to hijack enterprise accounts without needing to steal passwords. Attackers trick victims into authorizing their sessions through legitimate Microsoft infrastructure, making traditional phishing detection methods largely ineffective.
Simultaneously, the ClickFix campaign targets macOS developers using AI coding tools like Claude Code. Attackers redirect victims from Google Ads to counterfeit documentation pages, instructing them to run malicious terminal commands. This multi-stage attack not only compromises the system but also establishes a persistent backdoor, giving attackers real-time access to sensitive information.
New Vulnerabilities: Comment and Control
Further complicating the security landscape, a critical cross-vendor vulnerability class dubbed “Comment and Control” has been identified. This new category of prompt injection attacks weaponizes GitHub pull request titles and issue comments to hijack AI coding agents, including Claude Code, Google’s Gemini CLI, and GitHub Copilot, leading to the potential theft of API keys and access tokens directly from CI/CD environments.
The attack operates within GitHub itself, where an attacker can write a malicious pull request title or issue comment that the AI agent processes as trusted context, executing attacker-supplied instructions. For instance, in Claude Code, the PR title is interpolated into the agent’s prompt without any sanitization, allowing attackers to execute commands that can exfiltrate sensitive credentials.
New Initiative: Project Glasswing
In light of these vulnerabilities and emerging threats, Anthropic has launched Project Glasswing, a cybersecurity initiative that utilizes its new frontier model, Claude Mythos, to find and address security vulnerabilities across major systems. This initiative involves collaboration with organizations such as Amazon Web Services, Apple, Google, and Microsoft, among others, to enhance the security of critical software. Anthropic's Claude Mythos has already demonstrated its ability to discover thousands of high-severity zero-day vulnerabilities, showcasing its potential to significantly improve cybersecurity measures.
Why Should You Care
If you use Claude Code, your projects might be at risk. Imagine a thief breaking into your house and stealing your most valuable possessions — that’s what could happen if these vulnerabilities are exploited. Hackers could run malicious code on your systems, potentially leading to data breaches or financial loss.
Moreover, if your API keys are stolen, it could give attackers access to your applications and data, leading to further exploitation. The recent source code leak exacerbates these risks, as malicious actors now have direct access to the underlying code, making it easier to identify and exploit vulnerabilities. Protecting your sensitive information is crucial, and being aware of these vulnerabilities is the first step.
What's Being Done
Anthropic is aware of these vulnerabilities and is actively working on patches to fix the issues. Users should take immediate action to protect themselves. Here are some steps to consider:
- Update your Claude Code to the latest version as soon as patches are available.
- Review your configuration settings to ensure they are secure.
- Monitor your systems for any unusual activity. Experts are keeping a close eye on how quickly Anthropic can roll out these fixes and whether any attackers will exploit these vulnerabilities before they are patched. The source code leak has put additional pressure on Anthropic to address these vulnerabilities swiftly.
Technical Details
The newly discovered bypass vulnerability stems from a performance optimization in the bashPermissions.ts file, which limits security analysis to 50 entries per subcommand. If a command exceeds this threshold, the deny rules are skipped, allowing malicious commands to execute without any warning. This flaw could lead to the exfiltration of sensitive credentials, including SSH keys and API tokens.
The attack can be initiated by publishing a legitimate-looking GitHub repository that includes a CLAUDE.md file with a command pipeline that exceeds the 50-command limit, effectively bypassing security measures. With the source code now publicly available, attackers may find it easier to develop exploits targeting these vulnerabilities.
Patch Status
Anthropic has reportedly addressed the issue in Claude Code v2.1.90, but the fix involves applying a secure implementation that has yet to be deployed to public builds. Users are advised to audit CLAUDE.md files in any cloned repositories and treat deny rules as unreliable until the patch is fully implemented. The urgency to deploy these patches has increased significantly due to the source code leak, which could lead to more sophisticated attacks.
Broader Implications
The leak of Claude Code's source has raised concerns about the security landscape in AI-driven development. Experts warn that as AI coding assistants become more prevalent, the attack surface is expanding. Developers now have access to vast amounts of code, which can be exploited if proper security measures are not in place. The ease of prompting AI agents to find and exploit vulnerabilities underscores the need for a robust security framework that goes beyond traditional vulnerability management models. This situation highlights the importance of secure design principles in the development process, as the integration of AI tools can inadvertently introduce new risks.
The emergence of the 'Comment and Control' vulnerability class highlights a critical need for AI developers to prioritize security in their coding environments, especially in CI/CD pipelines where automation can inadvertently lead to severe exploits.





