AI Services Hacked - 6 Ways Attackers Exploit Them

Cybercriminals are increasingly turning AI tools themselves into attack infrastructure rather than relying on traditional malware. Attackers are exploiting vulnerabilities in AI services such as Claude and OpenClaw to launch sophisticated attacks on businesses, and companies must strengthen their security measures to counter these emerging threats.
What Happened
As businesses increasingly depend on AI technologies, attackers are adapting their strategies to exploit these systems. Cybercriminals now abuse AI tools much as they once abused built-in enterprise tools like PowerShell. The trend, dubbed "living off the AI land" (a play on the classic "living off the land" technique), lets attackers turn legitimate AI capabilities to malicious ends.
How Attackers Exploit AI
Experts have identified various methods attackers employ to abuse AI services:
MCP Server Impersonation
In September 2025, a counterfeit Model Context Protocol (MCP) server surfaced that impersonated a legitimate package. Once integrated into AI assistants it functioned normally, until a malicious update was introduced; it then siphoned off sensitive communications for days before detection, exposing enterprises to supply chain attacks.
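One mitigation for this class of supply chain attack is to pin the exact contents of any MCP server package before loading it, so a silently swapped update fails verification. The sketch below is illustrative: the server name and pinned hash are placeholders (the pinned value is the SHA-256 of empty bytes), not a real MCP server.

```python
import hashlib

# Hypothetical pin registry: MCP server name -> expected SHA-256 of its
# package bytes. The entry below is a placeholder (hash of b""), used only
# to illustrate the pattern.
PINNED_SERVERS = {
    "example-mcp": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_mcp_server(name: str, package_bytes: bytes) -> bool:
    """Trust the package only if its hash matches the pinned value.

    Unknown servers are rejected by default, so a look-alike name
    cannot slip through.
    """
    expected = PINNED_SERVERS.get(name)
    if expected is None:
        return False
    return hashlib.sha256(package_bytes).hexdigest() == expected
```

In practice the pin would be recorded when the server is first vetted and re-checked on every update, which turns a malicious code change into a hard failure instead of a silent compromise.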
Covert Command-and-Control Channels
Attackers are also using AI platforms as covert command-and-control (C2) channels. By disguising malicious traffic within legitimate AI service data, they can bypass traditional security measures. For instance, the SesameOp backdoor hid command traffic within the OpenAI Assistants API, masking malicious instructions as normal activity.
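Because this traffic terminates at legitimate AI endpoints, blocking by destination alone fails; a more workable control is reviewing *which processes* talk to AI APIs. The sketch below assumes a simple egress event feed (field names are invented for illustration) and flags calls to AI API domains from processes not on an approved list.

```python
# Illustrative egress-review sketch: the event schema ("process",
# "dest_domain") and the domain/process lists are assumptions, not a
# specific product's API.
AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}
APPROVED_PROCESSES = {"chat-frontend", "summarizer-svc"}

def flag_suspicious(events: list[dict]) -> list[dict]:
    """Return egress events to AI API domains from unapproved processes."""
    return [
        e for e in events
        if e["dest_domain"] in AI_API_DOMAINS
        and e["process"] not in APPROVED_PROCESSES
    ]

events = [
    {"process": "chat-frontend", "dest_domain": "api.openai.com"},
    {"process": "svchost.exe", "dest_domain": "api.openai.com"},
    {"process": "svchost.exe", "dest_domain": "example.com"},
]
flagged = flag_suspicious(events)
```

A backdoor like SesameOp blends in at the network layer, but an unexpected system process calling an Assistants-style API is still an anomaly worth investigating.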
Dependency Poisoning
Some attacks focus on poisoning downstream dependencies that AI agents rely on for data processing. A compromised NPM package injected into an agent's workflow can alter decision-making processes without any visible anomalies, similar to classical supply chain attacks.
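The standard defense here is lockfile integrity checking: any package whose recorded hash drifts from the lockfile, or that appears without being in the lockfile at all, should fail the build. A minimal sketch of that comparison (package names and integrity strings below are made up):

```python
def dependency_drift(lockfile_deps: dict[str, str],
                     installed_deps: dict[str, str]) -> list[str]:
    """Return packages whose installed integrity value deviates from the
    lockfile, including packages installed but never locked."""
    return sorted(
        name for name, integrity in installed_deps.items()
        if lockfile_deps.get(name) != integrity
    )

# Illustrative data: "chalk" was tampered with, "sneaky" was injected.
lock = {"left-pad": "sha512-aaa", "chalk": "sha512-bbb"}
installed = {"left-pad": "sha512-aaa", "chalk": "sha512-evil",
             "sneaky": "sha512-ccc"}
drift = dependency_drift(lock, installed)
```

For NPM specifically, installing with `npm ci` (which enforces the lockfile) rather than `npm install` closes much of this gap; the same principle applies to any package manager an AI agent's toolchain pulls from.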
Double Agents
Attackers are also weaponizing vulnerabilities within AI agents themselves. For example, EchoLeak (CVE-2025-32711), an AI command injection flaw in Microsoft 365 Copilot, allowed attackers to exfiltrate internal files via a single crafted email. Additionally, vulnerabilities in OpenClaw enabled malicious websites to take control of AI agents, with thousands of exposed instances detected.
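Attacks like EchoLeak work because untrusted content (an inbound email) lands in the agent's context alongside trusted instructions. One partial layer is screening untrusted text for instruction-like phrasing before an agent reads it. The patterns below are illustrative and easy to evade; this is a tripwire, not a fix for prompt injection.

```python
import re

# Naive screening sketch: the pattern list is an illustrative assumption,
# not a vetted ruleset. Determined attackers can rephrase around it.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"send .* to http", re.I),
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if the text contains instruction-like phrasing that
    should keep it out of an agent's context pending review."""
    return any(p.search(untrusted_text) for p in INJECTION_PATTERNS)
```

Stronger mitigations keep untrusted content in a separate, clearly labeled channel and strip the agent's ability to act on it autonomously, rather than relying on pattern matching alone.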
AI-Orchestrated Espionage
In a notable case, a suspected Chinese state-sponsored group utilized Claude Code for cyber-espionage. By automating tactical operations, they managed a significant portion of their campaign using AI, highlighting the potential for AI to facilitate large-scale attacks.
Modular Black-Hat AI Platforms
The emergence of dedicated offensive AI platforms, such as Xanthorox AI, represents a shift in the threat landscape. These platforms are specifically designed for cybercrime, featuring modules for malware generation and vulnerability exploitation, moving beyond traditional hacking methods.
What This Means for Businesses
As attackers increasingly exploit AI systems, organizations must treat AI tools with the same caution as human users. Implementing tight controls and specific monitoring is essential to mitigate risks. Security teams should never assume that AI systems are inherently safe, as the trust placed in these technologies can be easily exploited by malicious actors.
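Treating an AI tool like a human user concretely means per-agent least privilege plus an audit trail: every tool call is checked against an explicit allowlist and logged, with deny-by-default for anything unlisted. A minimal sketch (agent and tool names are invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical per-agent grants; anything not listed is denied.
AGENT_PERMISSIONS = {
    "support-bot": {"search_kb", "draft_reply"},
    "finance-bot": {"read_invoice"},
}

audit_log: list[str] = []

def invoke_tool(agent: str, tool: str) -> bool:
    """Allow the call only if the agent is explicitly granted the tool,
    and record every attempt either way."""
    allowed = tool in AGENT_PERMISSIONS.get(agent, set())
    audit_log.append(
        f"{datetime.now(timezone.utc).isoformat()} "
        f"{agent} {tool} {'ALLOW' if allowed else 'DENY'}"
    )
    return allowed
```

The audit log matters as much as the check: several of the incidents above were caught not by blocking, but by noticing an AI identity doing something no legitimate workflow required.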