AI Security - Addressing Non-Human Identity Risks
In short: AI agents need their own identities, and those identities must be secured with the same rigor we apply to human ones.
The RSA Conference 2026 addressed the security challenges posed by AI agents. With millions of non-human identities coming online, organizations face new risks and must adapt their security measures to protect these identities effectively.
What Happened
The RSA Conference 2026 highlighted a critical issue in cybersecurity: the security of non-human identities (NHIs). As AI technology evolves, businesses are deploying AI agents at an unprecedented rate. In 2025, there were approximately 28.6 million active AI agents globally, a number expected to surge to over 2.2 billion by 2030. This rapid growth is reshaping the landscape of identity management, introducing unique risks that traditional security measures are ill-equipped to handle.
AI agents operate differently from human users. They often lack clear ownership and are created on demand, leading to fragmented management processes. This creates vulnerabilities that attackers can exploit, especially since NHIs can outnumber human identities by a staggering 100:1. As Cisco's Jeetu Patel highlighted at the conference, traditional security models are breaking down under the weight of these new demands.
Who's Being Targeted
The risks associated with NHIs are not just theoretical; they have real-world implications. A notable incident involved a misconfigured Supabase database that exposed approximately 1.5 million API authentication tokens shortly after the launch of an AI agent social network. Attackers can leverage these exposed tokens to impersonate AI agents, creating significant insider threats, especially if these agents have access to sensitive internal systems like email or Slack.
As organizations increasingly rely on AI and cloud services, the attack surface expands. Security teams must recognize that NHIs represent a high-value target for cybercriminals. This shift necessitates a reevaluation of how identities are managed and secured within modern digital ecosystems.
What Data Was Exposed
The exposure of API tokens is just one example of the vulnerabilities facing organizations today. As AI agents proliferate, the types of data at risk include:
- Service account credentials with extensive system access
- Pipeline tokens linked to source code and deployment systems
- API keys granting access to critical services
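A first line of defense against the exposures listed above is scanning code, configs, and logs for credential-shaped strings before they leak. The sketch below shows the basic pattern-matching idea; the rule names and regexes are illustrative assumptions, and production scanners such as gitleaks or truffleHog ship far more comprehensive rule sets.

```python
import re

# Illustrative detection rules only (assumed for this sketch);
# real scanners maintain much larger, vendor-specific pattern sets.
TOKEN_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 chars.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # GitHub personal access tokens use the "ghp_" prefix.
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    # Generic "api_key = ..." / "token: ..." assignments.
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|token)\b\s*[:=]\s*['\"]?([A-Za-z0-9_\-]{20,})"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for likely leaked credentials."""
    findings = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running such a scan in CI on every commit catches pipeline tokens and API keys before they reach a repository, which is cheaper than rotating them after exposure.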
These data types are crucial for operational integrity and security. If compromised, they can lead to severe breaches, allowing attackers to manipulate systems and data without detection.
What You Should Do
Organizations must adopt a cloud-native identity security platform that accommodates the unique needs of NHIs. Legacy privileged access management (PAM) tools are inadequate for managing cloud identities at scale. Security practices should evolve to include:
- Implementing automated rotation of credentials
- Ensuring visibility across all identity types
- Establishing least-privilege access controls
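The first practice above, automated credential rotation, amounts to issuing short-lived tokens and replacing them on expiry rather than letting static secrets live indefinitely. A minimal in-memory sketch follows; the class and parameter names are assumptions for illustration, and a real deployment would back this with a managed secrets service rather than a Python dictionary.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Credential:
    token: str
    issued_at: float
    ttl_seconds: int

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.issued_at >= self.ttl_seconds

class RotatingStore:
    """Issues short-lived per-agent tokens and rotates them on expiry.

    Hypothetical sketch: a production system would persist credentials
    in a secrets manager and revoke the old token on rotation.
    """

    def __init__(self, ttl_seconds=900):  # 15-minute tokens by default
        self.ttl = ttl_seconds
        self._creds = {}

    def get(self, agent_id, now=None):
        cred = self._creds.get(agent_id)
        if cred is None or cred.expired(now):
            issued = time.time() if now is None else now
            cred = Credential(secrets.token_urlsafe(32), issued, self.ttl)
            self._creds[agent_id] = cred
        return cred.token
```

Because every agent fetches its token through `get()`, no long-lived secret is ever written into a config file, and a leaked token ages out within one TTL window.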
Furthermore, data privacy concerns are paramount. A recent study revealed that while 90% of organizations have expanded their privacy programs to include AI, only 12% have mature AI governance. It’s crucial for organizations to prioritize the development of robust identity management strategies that encompass both human and non-human identities to mitigate risks effectively.
SC Media