AI & Security | HIGH

LLMs Breaking Access Control - Hidden Risks Uncovered

SecurityWeek
Tags: LLMs, access control, policy as code, Vatsal Gupta, security flaws
🎯 Basically, AI can create security rules that might let too many people in by mistake.

Quick Summary

AI-generated access control policies can introduce serious security flaws. Organizations may unknowingly grant excessive permissions, risking their security. It's crucial to validate these policies before deployment.

What Happened

In recent discussions, Vatsal Gupta, a senior security engineer at Apple, highlighted a critical issue with the use of Large Language Models (LLMs) in generating organizational access control policies. As businesses increasingly adopt policy as code, LLMs are employed to write complex code in languages like Rego and Cedar. This shift aims to enhance efficiency, but it introduces significant risks. LLMs can produce policies that appear valid but contain hidden flaws, potentially undermining the organization's security model.

The problem lies in the semantic correctness of the generated policies. While they may compile successfully, a single missing condition or a misinterpreted attribute can redefine access boundaries, leading to unintended permissions. This subtlety poses a serious threat, as these flaws often go unnoticed, allowing access to sensitive resources that should be restricted.
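To make the "single missing condition" failure concrete, here is a hypothetical sketch in Python rather than Rego or Cedar. The `intended_policy` and `generated_policy` functions and their attributes are illustrative, not taken from the original article: the intended rule scopes report access to the user's own region, while a generated rewrite that compiles and looks plausible silently drops the region check.

```python
# Hypothetical illustration: a simplified attribute-based access check.
# Intended rule: engineers may read reports only within their own region.

def intended_policy(user, resource):
    return (
        user["role"] == "engineer"
        and resource["type"] == "report"
        and user["region"] == resource["region"]  # the contextual condition
    )

def generated_policy(user, resource):
    # A rewrite that "looks right" but silently drops the region check,
    # turning regional access into global access.
    return user["role"] == "engineer" and resource["type"] == "report"

user = {"role": "engineer", "region": "eu"}
foreign_report = {"type": "report", "region": "us"}

print(intended_policy(user, foreign_report))   # False: cross-region denied
print(generated_policy(user, foreign_report))  # True: unintended global access
```

Both functions are syntactically valid and would pass a compile-only check; only a semantic test against cross-region inputs exposes the difference.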

Who's Being Targeted

Organizations leveraging AI for policy generation are at risk. As LLMs become integrated into engineering workflows, developers rely on them to automate the creation of security rules and access control policies. This reliance can lead to a false sense of security, as the generated policies may not align with the intended access restrictions. The continuous deployment of these flawed policies can result in a drift towards over-permissioned environments, where employees have access to more data than necessary.

Gupta's research indicates that many organizations may believe they are enforcing a least privilege model, while in reality, they are expanding their attack surface due to these unnoticed flaws. The risk compounds as more policies are generated, creating a complex web of security issues that are difficult to manage.

Tactics & Techniques

The recurring failure patterns identified in LLM-generated policies include:

  • Missing contextual constraints: Policies intended to limit access based on specific criteria, like region or department, may lack these conditions entirely, leading to global access.
  • Absence of deny logic: Many policies rely on a baseline deny posture, but LLMs may only capture exceptions without enforcing the underlying restrictions, resulting in broader access than intended.
  • Hallucination of attributes: LLMs can introduce non-existent attributes, causing unpredictable behavior at runtime.
  • Dropped temporal conditions: Policies that should control access based on time or session context may be simplified into static rules, leading to always-on access.
  • Action misclassification: Intended restrictions on sensitive actions may be misinterpreted, allowing broader operations than intended.
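The second pattern above, missing deny logic, is easiest to see by contrasting two evaluation postures for the same rule set. The sketch below is a hypothetical Python model (the rule names and attributes are invented for illustration): a default-deny evaluator admits nothing without an explicit allow, while a policy that kept only the exceptions admits everything else.

```python
# Hypothetical contrast of two evaluation postures; rules are illustrative.

def allow_admin_read(user, action):
    return user.get("role") == "admin" and action == "read"

def deny_contractor_delete(user, action):
    return user.get("type") == "contractor" and action == "delete"

def evaluate_default_deny(allow_rules, user, action):
    # Intended baseline: nothing passes unless an allow rule matches.
    return any(rule(user, action) for rule in allow_rules)

def evaluate_default_allow(deny_rules, user, action):
    # Failure mode: the generated policy kept only the exceptions,
    # so anything not explicitly denied slips through.
    return not any(rule(user, action) for rule in deny_rules)

viewer = {"role": "viewer", "type": "employee"}
print(evaluate_default_deny([allow_admin_read], viewer, "delete"))        # False
print(evaluate_default_allow([deny_contractor_delete], viewer, "delete")) # True
```

Both evaluators honor the rules they were given; the security difference lies entirely in what happens when no rule matches.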

These failings stem from the AI's tendency to simplify language, which can carry significant security implications. Over time, even minor deviations can accumulate, creating a large attack surface that is difficult to audit.

Defensive Measures

To mitigate these risks, organizations should not abandon LLMs but instead revise their trust model regarding policy generation. Key recommendations include:

  • Validation layers: Introduce checks between policy generation and enforcement to ensure all necessary components are present and correct.
  • Testing policies: Policies should be tested for correctness, not just compiled, to catch potential flaws before deployment.
  • Enforce deny-by-default principles: Ensure that policies explicitly restrict access unless specified otherwise.
  • Treat authorization logic as high-risk: Recognize the potential for flaws and apply rigorous scrutiny to generated policies.
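A validation layer of the kind recommended above could start as a pre-deployment linter. The sketch below uses a hypothetical dictionary representation of a policy (not Rego or Cedar syntax, and the attribute schema is invented for illustration) to check for two of the failure patterns listed earlier: hallucinated attributes and a missing deny-by-default baseline.

```python
# Hypothetical policy linter; schema and policy shape are illustrative.
KNOWN_ATTRIBUTES = {"role", "region", "department", "type"}

def validate_policy(policy):
    """Return a list of problems found in a generated policy dict."""
    problems = []
    if policy.get("default") != "deny":
        problems.append("policy does not default to deny")
    for rule in policy.get("rules", []):
        for attr in rule.get("conditions", {}):
            if attr not in KNOWN_ATTRIBUTES:
                problems.append(f"unknown attribute: {attr}")
    return problems

generated = {
    "default": "allow",  # drifted from the intended deny baseline
    "rules": [
        {"effect": "allow",
         "conditions": {"role": "engineer", "clearance_tier": "gold"}},
    ],
}
print(validate_policy(generated))
# ['policy does not default to deny', 'unknown attribute: clearance_tier']
```

Wiring a check like this into the pipeline between generation and enforcement means a flawed policy fails loudly in CI rather than silently widening access in production.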

As organizations embrace AI-assisted security engineering, the focus should be on achieving correctness, auditability, and trust. In the realm of authorization, being 'almost correct' is simply not sufficient.

🔒 Pro insight: The reliance on LLMs for policy generation necessitates robust validation mechanisms to prevent systemic security flaws in access control.

Original article from SecurityWeek · Kevin Townsend
