Google Cloud Vertex AI Vulnerability Exposes Sensitive Data, New Mitigations Recommended

A vulnerability in Google Cloud's Vertex AI platform could let attackers steal sensitive data and compromise cloud infrastructure. Google now recommends new security measures, including custom service accounts, to mitigate the risk.
Artificial intelligence agents are rapidly becoming integral to enterprise workflows, but they also introduce new attack surfaces. Security researchers recently uncovered a significant vulnerability in Google Cloud Platform's Vertex AI Agent Engine: by exploiting default permission scoping, attackers could turn deployed AI agents into 'double agents' that secretly exfiltrate data and compromise cloud infrastructure.
The core issue lies in the default permissions granted to the Per-Product, Per-Project Service Agent (P4SA) associated with deployed AI agents. The researchers built a test agent using Google's Agent Development Kit (ADK) and discovered they could easily extract the underlying service agent's credentials. With these stolen credentials, an attacker could pivot out of the AI agent's isolated execution context and infiltrate the broader consumer project, a privilege escalation that transforms a helpful AI tool into a dangerous insider threat.
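The pivot described above rests on a general property of GCP workloads: any code running inside one can mint an OAuth access token for its attached service identity by calling the instance metadata server. The researchers' exact extraction technique isn't reproduced here; the sketch below only builds the request against GCP's documented metadata endpoint, without sending it.

```python
import urllib.request

# GCP's documented metadata endpoint for the attached service account's token.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request() -> urllib.request.Request:
    # The Metadata-Flavor header is mandatory; it blocks naive SSRF relays
    # that cannot set custom headers.
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )

req = build_token_request()
# Inside a GCP workload, urllib.request.urlopen(req) would return a JSON body
# containing a short-lived OAuth access token for the attached identity --
# which is exactly what makes credential theft from an agent so valuable.
```

Any process that can reach this endpoint from within the agent's runtime inherits the full power of the agent's identity, which is why the scope of that identity matters so much.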
With the compromised identity, an attacker could execute several malicious actions:
- Read all data within consumer Google Cloud Storage buckets.
- Access restricted Google-owned Artifact Registry repositories.
- Download proprietary container images tied to the Vertex AI Reasoning Engine.
- Map internal software supply chains to identify further vulnerabilities.
The compromised credentials also granted access to the Google-managed tenant project dedicated to the agent instance. Within this environment, Palo Alto Networks researchers found sensitive deployment files, including references to internal storage buckets and a Python pickle file. Python's pickle module is notoriously unsafe for deserializing untrusted data, because unpickling can execute arbitrary code. If an attacker successfully manipulated this file, they could achieve remote code execution and establish a persistent backdoor.
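To see why a writable pickle file is so dangerous, consider the textbook pickle gadget below. This is illustrative only, not the actual file the researchers found: an object's `__reduce__` method tells pickle to call an arbitrary callable (here `eval`, standing in for attacker code) during deserialization.

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # During unpickling, pickle calls the returned callable with the
        # given arguments. An attacker controls both.
        return (eval, ("6 * 7",))

# The attacker plants these bytes where the victim will load them.
tainted = pickle.dumps(MaliciousPayload())

# The victim merely "loads data", yet attacker-chosen code runs:
print(pickle.loads(tainted))  # 42
```

In a real attack the callable would not be a harmless `eval` of arithmetic but something like a reverse shell, which is why tampering with a pickle file that the platform later deserializes amounts to remote code execution.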
Additionally, the default OAuth 2.0 scopes assigned to the Agent Engine were found to be dangerously permissive. These overly broad scopes could, in theory, extend an attacker’s reach beyond the cloud environment into an organization’s Google Workspace applications. While missing Identity and Access Management permissions prevented immediate access, the wide scopes represented a severe structural security weakness.
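Auditing for this class of weakness is mechanical: compare the scopes a token actually carries against the minimal set the workload needs. The helper below is hypothetical (it is not a Vertex AI API); the scope URLs themselves are real Google OAuth 2.0 scopes, and the baseline chosen is just an assumption for the example.

```python
# Assumed minimal need for this example: read-only access to Cloud Storage.
BASELINE_SCOPES = {
    "https://www.googleapis.com/auth/devstorage.read_only",
}

def excessive_scopes(granted: set[str], baseline: set[str]) -> set[str]:
    """Return the scopes granted beyond the approved baseline."""
    return granted - baseline

# The broad cloud-platform scope alone lets a stolen token reach far more
# APIs than the agent requires.
granted = {
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/devstorage.read_only",
}
print(sorted(excessive_scopes(granted, BASELINE_SCOPES)))
# ['https://www.googleapis.com/auth/cloud-platform']
```

Even when IAM denies the underlying API calls, as it did here, flagging and trimming excess scopes removes a layer of the attack before it starts.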
Following a responsible disclosure process, Google collaborated with the security researchers to mitigate these threats. Google confirmed that robust controls prevent attackers from altering production base images, blocking potential cross-tenant supply chain attacks, and updated its official Vertex AI documentation to increase transparency around resource and account usage.

To properly secure Vertex AI Agent Engine deployments, organizations must move beyond default configurations. Google now recommends a Bring Your Own Service Account (BYOSA) approach: by replacing the default service agent with a custom account, security teams can strictly enforce the principle of least privilege and grant the AI agent only the exact permissions required to function. This proactive measure aims to significantly reduce the risk of exploitation in future deployments.
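The value of BYOSA is easiest to see as a blast-radius comparison. The sketch below is conceptual, not the Vertex AI API: the IAM permission names are real, but the "default" permission set is invented for illustration and does not represent Google's actual P4SA grants.

```python
# A broadly scoped default identity...
DEFAULT_AGENT_PERMISSIONS = {
    "storage.objects.get",
    "storage.objects.list",
    "storage.buckets.list",
    "artifactregistry.repositories.downloadArtifacts",
}

# ...versus a custom service account (BYOSA) holding only what the agent needs.
LEAST_PRIVILEGE_PERMISSIONS = {"storage.objects.get"}

def is_allowed(identity: set[str], permission: str) -> bool:
    """Crude stand-in for an IAM check: does this identity hold the permission?"""
    return permission in identity

# Stealing the default identity lets an attacker enumerate every bucket...
print(is_allowed(DEFAULT_AGENT_PERMISSIONS, "storage.buckets.list"))    # True
# ...while stealing the least-privilege identity yields far less.
print(is_allowed(LEAST_PRIVILEGE_PERMISSIONS, "storage.buckets.list"))  # False
```

The credential theft itself may still be possible, but with BYOSA the stolen identity is worth far less, which is the essence of the least-privilege recommendation.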