AI & Security · HIGH

AI Security - Addressing Non-Human Identity Risks

SC Media
AI agents · identity security · cloud-native · API tokens · Cisco
🎯 Basically: AI systems need their own identities, and we must secure them as rigorously as human ones.

Quick Summary

The RSA Conference 2026 addressed the security challenges posed by AI agents. With millions of non-human identities emerging, organizations face new risks. It's essential to adapt security measures to protect these identities effectively.

What Happened

The RSA Conference 2026 highlighted a critical issue in cybersecurity: the security of non-human identities (NHIs). As AI technology evolves, businesses are deploying AI agents at an unprecedented rate. In 2025, there were approximately 28.6 million active AI agents globally, a number expected to surge to over 2.2 billion by 2030. This rapid growth is reshaping the landscape of identity management, introducing unique risks that traditional security measures are ill-equipped to handle.

AI agents operate differently from human users. They often lack clear ownership and are created on-demand, leading to fragmented management processes. This creates vulnerabilities that attackers can exploit, especially since NHIs can outnumber human identities by a staggering 100:1. As highlighted by Cisco’s Jeetu Patel at the conference, traditional security models are breaking down under the weight of these new demands.

Who's Being Targeted

The risks associated with NHIs are not just theoretical; they have real-world implications. A notable incident involved a misconfigured Supabase database that exposed approximately 1.5 million API authentication tokens shortly after the launch of an AI agent social network. Attackers can leverage these exposed tokens to impersonate AI agents, creating significant insider threats, especially if these agents have access to sensitive internal systems like email or Slack.

As organizations increasingly rely on AI and cloud services, the attack surface expands. Security teams must recognize that NHIs represent a high-value target for cybercriminals. This shift necessitates a reevaluation of how identities are managed and secured within modern digital ecosystems.

What Data Was Exposed

The exposure of API tokens is just one example of the vulnerabilities facing organizations today. As AI agents proliferate, the types of data at risk include:

  • Service account credentials with extensive system access
  • Pipeline tokens linked to source code and deployment systems
  • API keys granting access to critical services

These data types are crucial for operational integrity and security. If compromised, they can lead to severe breaches, allowing attackers to manipulate systems and data without detection.
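Exposures like the token leak described above are commonly caught by pattern-based secret scanning of code, configs, and database dumps. A minimal sketch of the idea (the regexes below are illustrative assumptions, not a complete or authoritative rule set — production scanners ship far larger pattern libraries):

```python
import re

# Illustrative credential patterns only. Real secret-scanning tools use
# hundreds of vendor-specific rules plus entropy checks; these three are
# simplified examples for common token shapes.
TOKEN_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def find_exposed_tokens(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for anything resembling a credential."""
    hits = []
    for label, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group(0)))
    return hits
```

Running a scanner like this against build logs, container images, and public repos is a cheap early-warning layer, but it only complements (never replaces) short credential lifetimes and rotation.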

What You Should Do

Organizations must adopt a cloud-native identity security platform that accommodates the unique needs of NHIs. Legacy privileged access management (PAM) tools are inadequate for managing cloud identities at scale. Security practices should evolve to include:

  • Implementing automated rotation of credentials
  • Ensuring visibility across all identity types
  • Establishing least-privilege access controls
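As a concrete illustration of the first practice, automated rotation can be as simple as attaching an issue timestamp to every NHI credential and re-issuing anything past a maximum age. A minimal Python sketch (the class, policy, and function names are hypothetical, not from any specific platform):

```python
import secrets
from datetime import datetime, timedelta, timezone

# Example policy: rotate non-human identity credentials weekly.
MAX_AGE = timedelta(days=7)

class AgentCredential:
    """Minimal record for a non-human identity's API token (illustrative)."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.token = secrets.token_urlsafe(32)
        self.issued_at = datetime.now(timezone.utc)

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) - self.issued_at > MAX_AGE

    def rotate(self) -> str:
        """Issue a fresh token and reset the age clock (old token is dropped)."""
        self.token = secrets.token_urlsafe(32)
        self.issued_at = datetime.now(timezone.utc)
        return self.token

def rotate_stale(credentials: list[AgentCredential]) -> list[str]:
    """Rotate every credential past its maximum age; return rotated agent ids."""
    return [c.agent_id for c in credentials if c.is_expired() and c.rotate()]
```

In practice the rotation step would also revoke the old token at the issuing service and push the new one to a secrets manager, but the core loop — age check, re-issue, audit trail of which agents were rotated — looks like this.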

Data privacy is a related gap: a recent study found that while 90% of organizations have expanded their privacy programs to cover AI, only 12% have mature AI governance. Closing that gap means building identity management strategies that cover both human and non-human identities.

🔒 Pro insight: The surge in AI agents necessitates a paradigm shift in identity security, moving beyond traditional PAM tools to address the unique challenges posed by NHIs.

Original article from SC Media

