AI & Security · MEDIUM

AI Security - Entro Launches Governance for AI Agents


Basically, Entro Security has created a tool to help companies better control what their AI agents can access.

Quick Summary

Entro Security has launched Agentic Governance & Administration (AGA), a new governance tool for AI agents. The solution helps organizations manage AI access and addresses the security challenges that come with rapid agent adoption. With AGA, security teams can regain control and visibility over AI activity.

What Happened

Entro Security has unveiled a new feature called Agentic Governance & Administration (AGA), aimed at helping organizations manage AI agents and their access across various enterprise systems. As the use of AI assistants and agent platforms becomes more prevalent, the need for effective governance has never been more critical. AGA seeks to address this by returning to fundamental principles such as inventory, ownership, least privilege, auditability, and enforcement.

CEO Itzik Alvas emphasized that enterprise AI adoption often begins without a formal strategy. Instead, it starts with simple connections, like integrating a tool with a large language model (LLM) or authenticating an agent against platforms like SharePoint or Salesforce. This rapid spread of AI tools often leaves security teams scrambling to answer critical questions about who is connecting to what systems and with what permissions.

Who's Affected

Organizations that are accelerating their adoption of AI technologies will find AGA particularly beneficial. As AI agents become commonplace, security and identity teams face challenges in tracking and managing these agents effectively. With AGA, these teams can regain clarity and control over AI access, ensuring that they are aware of what agents are operating within their systems and the permissions they hold.

The introduction of AGA is timely, as many companies are integrating AI into their workflows without fully understanding the implications for security and governance. This tool aims to bridge that gap, providing a structured approach to managing AI access and ensuring compliance with security protocols.

What Data Was Exposed

While the article does not specify any data exposure incidents, it highlights the risks associated with Shadow AI—the unmonitored use of AI tools that can operate outside traditional security frameworks. AGA addresses this by providing visibility into the AI agents that are in use, including their access paths and the identities that power them. This structured oversight helps organizations identify potential vulnerabilities before they can be exploited.

AGA builds a comprehensive profile of AI agents by analyzing three layers: sources, targets, and identities. This approach allows organizations to keep track of where AI agents are running, what they can access, and which identities are involved in those interactions.
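The three-layer profile described above can be sketched as a simple data model. This is an illustrative assumption, not Entro's actual schema: the `AgentProfile` class and its field names are hypothetical, chosen only to show how sources, targets, and identities combine into auditable access paths.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the three layers described in the article.
# Class and field names are illustrative assumptions, not Entro's schema.

@dataclass
class AgentProfile:
    name: str
    sources: list[str] = field(default_factory=list)     # where the agent runs (e.g. an agent platform)
    targets: list[str] = field(default_factory=list)     # systems it can reach (e.g. SharePoint, Salesforce)
    identities: list[str] = field(default_factory=list)  # credentials/service accounts powering that access

    def access_paths(self) -> list[tuple[str, str, str]]:
        """Enumerate every (source, identity, target) path for review."""
        return [(s, i, t)
                for s in self.sources
                for i in self.identities
                for t in self.targets]

agent = AgentProfile(
    name="sales-assistant",
    sources=["copilot-studio"],
    targets=["sharepoint", "salesforce"],
    identities=["svc-sales-bot"],
)
print(len(agent.access_paths()))  # → 2 paths to review
```

Enumerating paths this way is what makes the inventory auditable: each tuple is a concrete question a security team can answer ("should this identity reach this target from this source?").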

What You Should Do

Organizations should consider implementing AGA to enhance their AI governance strategies. This tool not only helps in discovering existing AI agents but also in monitoring their activities and enforcing policies around their use. By leveraging AGA, security teams can ensure that AI access is managed effectively and that any unauthorized activities are promptly addressed.

In addition, organizations should conduct regular audits of their AI systems to ensure compliance with security policies. This proactive approach will help mitigate risks associated with AI-driven access and maintain a secure environment as AI technologies continue to evolve.

🔒 Pro insight: AGA's structured approach to AI governance is essential as organizations increasingly rely on AI agents for critical operations.

Original article from Help Net Security · Industry News


Related Pings

MEDIUM · AI & Security

AI Security - Discern Deploys Six Agents for Analysis

Discern Security has launched six AI agents to streamline security analysis and remediation. These tools help teams prioritize tasks and reduce risks. This innovation is essential for navigating complex security environments effectively.

Help Net Security

MEDIUM · AI & Security

AI Security - Teleport Launches Beams for Agentic AI

Teleport has announced Beams, a new runtime to enhance security for AI agents. This innovation simplifies IAM challenges, making it easier for teams to deploy AI safely. With Beams, organizations can innovate without compromising security.

Help Net Security

HIGH · AI & Security

AI Security - Ceros Enhances Control Over Claude Code

Ceros empowers security teams with visibility over Claude Code, an AI coding agent. This tool addresses security gaps, ensuring compliance and protecting sensitive data. Organizations can now monitor AI actions effectively.

The Hacker News

HIGH · AI & Security

AI Security - Arcjet Introduces Inline Defense Against Attacks

Arcjet has launched a new tool to stop prompt injection attacks on AI systems. This capability helps developers block malicious requests before they reach AI models, a notable safeguard for companies deploying AI technologies.

Help Net Security

MEDIUM · AI & Security

AI Security - Dashlane Unveils Omnix AI Advisor for Teams

Dashlane has launched the Omnix AI Advisor, enhancing credential risk management for security teams. This AI tool translates complex data into actionable insights, making credential threats easier to manage proactively.

Help Net Security

HIGH · AI & Security

AI Security - Addressing High Confidence Errors in Models

AI models can confidently provide wrong answers, raising serious concerns. Christian Debes discusses the implications for organizations and the need for accountability. It's crucial to address these gaps to ensure responsible AI use.

Help Net Security