LLMs Generate Predictable Passwords: A Security Risk

A recent analysis has uncovered that Large Language Models (LLMs) generate passwords that are predictably structured, posing a significant security risk. This flaw could lead to unauthorized access to sensitive data and accounts.

Vulnerabilities · Severity: HIGH · 📰 2 sources

Original Reporting

Schneier on Security

AI Summary

CyberPings AI · Reviewed by Rohit Rana

🎯Imagine if a robot made your passwords, but it always picked the same few letters and numbers. That's what LLMs do! They aren't random enough, which means hackers could guess them easily. It's like using 'password123' but thinking it's super secure. We need to be careful about using AI to create passwords.

What Happened

A recent analysis revealed a concerning flaw in how Large Language Models (LLMs) generate passwords. These AI systems create passwords that follow predictable patterns, making them less secure than random passwords. Out of 50 generated passwords, many began with an uppercase 'G' followed by the number '7', showcasing a clear lack of randomness.

The study found that certain characters appeared far more frequently than others. Characters like 'L', '9', 'm', '2', '$', and '#' were present in all 50 passwords, while others like '5' and '@' appeared only once. This uneven distribution indicates a significant flaw in the randomness of the passwords. Additionally, none of the passwords contained repeating characters, which is statistically very unlikely across 50 truly random passwords of this length. This pattern likely reflects Claude imitating what humans perceive as random: to a person, a string with repeated characters looks less random, so the model avoids repeats even though a true random generator would not.
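The no-repeat observation above can be checked with a quick birthday-style calculation. This is a sketch under assumed parameters (16-character passwords drawn from the 94 printable ASCII symbols; the source does not state the exact alphabet):

```python
import math

def p_no_repeat(length: int, alphabet: int) -> float:
    """Probability that a uniformly random string of `length` symbols,
    drawn from an alphabet of `alphabet` symbols, has no repeated symbol."""
    p = 1.0
    for i in range(length):
        p *= (alphabet - i) / alphabet
    return p

# Assumed parameters: 16-char passwords over 94 printable ASCII symbols.
single = p_no_repeat(16, 94)
all_fifty = single ** 50  # chance that all 50 samples avoid repeats

print(f"P(no repeat, one password) ≈ {single:.3f}")
print(f"P(no repeat, all 50)       ≈ {all_fifty:.2e}")
```

Under these assumptions a single random password avoids repeats only about a quarter of the time, so all 50 doing so is astronomically unlikely by chance.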

Interestingly, the analysis showed that there were only 30 unique passwords among the 50 generated. The most common password, 'G7$kL9#mQ2&xP4!w', appeared 18 times, giving it a staggering 36% probability of being produced on any given request. For a password that should carry roughly 100 bits of entropy, the chance of any single value recurring even once should be negligible.
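One way to quantify how bad this is: min-entropy, the standard worst-case measure of guessing difficulty, depends only on the most probable outcome. A sketch using the figures reported above (18 occurrences in 50 samples):

```python
import math

# Reported figures: the most common password appeared 18 times out of 50.
p_max = 18 / 50
min_entropy_bits = -math.log2(p_max)

print(f"min-entropy ≈ {min_entropy_bits:.2f} bits")
```

A first guess that succeeds 36% of the time corresponds to under 1.5 bits of min-entropy, against the roughly 100 bits a random 16-character password should provide.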

Why Should You Care

You might think, "Why does this matter to me?" Well, if AI systems are creating accounts or managing sensitive information, they need secure passwords. Predictable passwords are like leaving your front door wide open. If an AI generates a password that is easy to guess, it could lead to unauthorized access to your accounts or data.

Consider your own online accounts. If an AI is creating passwords for you, and those passwords are easily guessable, it puts your personal information at risk. Just like you wouldn’t use '123456' as a password, you shouldn’t rely on AI-generated ones that follow predictable patterns. The security of your data could hinge on the randomness of these passwords.

What's Being Done

Experts are now raising alarms about the implications of AI-generated passwords. The focus is on improving the algorithms that generate passwords to ensure better randomness and security. Here are some immediate actions you can take:

  • Use a password manager that generates truly random passwords for you.
  • Avoid relying on AI-generated passwords for sensitive accounts until improvements are made.
  • Stay informed about updates in AI technology and security practices.
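If you need a quick stand-in for a password manager, most languages ship a CSPRNG in the standard library. A minimal sketch using Python's `secrets` module (the 16-character length is an illustrative choice):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password from a CSPRNG, not a language model."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)
```

Unlike an LLM, `secrets` draws from the operating system's cryptographic randomness source, so every character is independent and uniformly distributed over the alphabet.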

Security professionals are closely monitoring developments in LLMs and their applications, especially as AI continues to evolve in managing sensitive tasks. Expect discussions around enhancing password security protocols in AI systems to become more prominent.

The Architectural Incompatibility

Recent research from AI security firm Irregular and Kaspersky has confirmed that LLMs generate structurally predictable passwords that traditional entropy meters misjudge as secure. The models are not generating passwords so much as retrieving likely strings learned from their training data. This flaw allows adversaries to build model-specific attack dictionaries, drastically shrinking the effective keyspace.

For instance, Irregular’s tests showed that Claude Opus 4.6 produced passwords with only 27 bits of entropy, far below the approximately 98 bits expected from a cryptographically secure pseudorandom number generator (CSPRNG). This discrepancy highlights the fundamental incompatibility between LLMs and cryptographic applications, as LLMs are trained to predict the most likely next character rather than generate random sequences.
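The ~98-bit figure can be sanity-checked: a uniformly random string carries length × log2(alphabet size) bits of entropy. A sketch under assumed alphabets (the exact character set used in the tests is not stated in the source):

```python
import math

def ideal_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random string: length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# 16-char password over 94 printable ASCII symbols (assumed parameters).
print(f"{ideal_entropy_bits(16, 94):.1f} bits")  # roughly 105 bits
# A 62-symbol alphanumeric alphabet lands near the ~98 bits cited above.
print(f"{ideal_entropy_bits(16, 62):.1f} bits")  # roughly 95 bits
```

Either way, the 27 bits measured for the model's output is a small fraction of what the password's length and character mix would suggest.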

The Agentic Injection Problem

A significant risk arises when AI coding agents autonomously generate credentials in development environments without explicit instructions. These credentials often end up in configuration files or version control systems, exposing organizations to security vulnerabilities. Traditional secret scanning tools fail to detect these LLM-generated passwords due to their reliance on known format patterns rather than character distribution analysis.

Organizational Response Priorities

Organizations should conduct audits of AI-assisted repositories, particularly focusing on configuration files and environment variables. Credentials that exhibit LLM-characteristic distributions should be scrutinized and rotated if their origins cannot be traced to a CSPRNG. Additionally, organizations should mandate explicit CSPRNG usage in AI-generated credentials to mitigate the risks associated with agentic injection.
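Since traditional scanners match known token formats, a distribution-based check is a useful complement when auditing repositories. A rough sketch of one such heuristic; the thresholds and logic here are illustrative assumptions, not taken from the research:

```python
def looks_llm_generated(candidate: str) -> bool:
    """Heuristic flag for LLM-style credentials: mixed character classes
    but zero repeated characters, which is uncommon in truly random
    strings of this length. Thresholds are illustrative, not calibrated."""
    if len(candidate) < 12:
        return False
    has_upper = any(c.isupper() for c in candidate)
    has_digit = any(c.isdigit() for c in candidate)
    has_symbol = any(not c.isalnum() for c in candidate)
    no_repeats = len(set(candidate)) == len(candidate)
    return has_upper and has_digit and has_symbol and no_repeats

print(looks_llm_generated("G7$kL9#mQ2&xP4!w"))  # True: the sample above
print(looks_llm_generated("hunter2"))           # False: too short
```

A flag from a check like this is only a prompt for human review and rotation, not proof of origin; plenty of legitimate random passwords will also lack repeats.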

🔒 Pro Insight

The predictable nature of LLM-generated passwords highlights a critical flaw in their design, making them unsuitable for secure applications. Organizations must take proactive measures to audit and secure their systems against these vulnerabilities.
