
🎯Basically, hackers can trick AI agents into leaking sensitive data.
What Happened
Security researchers from Capsule Security have discovered significant prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce. These vulnerabilities allow attackers to inject malicious commands via seemingly harmless prompts, leading to potential data exfiltration.
The Flaw
In Microsoft Copilot, the issue, dubbed ShareLeak, arises from how Copilot processes SharePoint forms. Attackers can insert a manipulated payload into standard form fields, such as comments. This payload is then executed by the AI agent as if it were a legitimate command. As a result, sensitive customer data can be accessed and exfiltrated, even when Microsoft’s security mechanisms flag suspicious behavior.
On the other hand, Salesforce Agentforce allows malicious instructions to be embedded in publicly accessible lead forms. When an internal user instructs the agent to process these leads, the agent executes the harmful commands, potentially leading to unauthorized data disclosures.
Who's Affected
Both Microsoft and Salesforce users are at risk. The vulnerabilities can lead to the exposure of sensitive business and customer data, impacting organizations that rely on these AI agents for workflow optimization.
What Data Was Exposed
The data at risk includes sensitive customer information stored in SharePoint and CRM databases. Attackers could potentially access multiple lead records simultaneously, making the impact even more severe.
Patch Status
Microsoft has released a patch addressing the Copilot vulnerability, which has a CVSS score of 7.5, indicating a high severity level. However, Salesforce has classified the issue as configuration-specific and suggested optional controls, which Capsule Security experts argue undermines the purpose of autonomous agents.
Immediate Actions
Organizations using these platforms should: By taking these steps, companies can better protect themselves against potential data breaches stemming from these vulnerabilities.
Containment
- 1.Treat all external inputs as untrusted.
- 2.Implement strict input validation to separate data from commands.
Remediation
- 3.Enforce least-privilege access controls.
- 4.Establish rigorous monitoring for outgoing communications.
🔒 Pro insight: The identified vulnerabilities highlight the critical need for robust input validation in AI-driven applications to prevent data exfiltration.



