AI Coding Assistants - Secrets Leaked at Alarming Rate
In short, AI coding tools are exposing sensitive credentials more often than human programmers do.
AI coding assistants are leaking secrets at alarming rates. With a 34% rise in overall leaks, developers face significant risks to data security. GitGuardian highlights the urgent need for better practices to protect sensitive information.
What Changed
Secrets leaked via public GitHub commits rose by 34% in 2025, the largest increase since GitGuardian began tracking these incidents in 2021. The report found that AI coding assistants, like Claude Code, are twice as likely to leak secrets as traditional human developers, underscoring the growing vulnerabilities that AI tools introduce into software development.
The total number of leaked secrets reached approximately 28.65 million in 2025, a substantial rise from 21 million in 2024. This sharp increase indicates a troubling acceleration in the exposure of sensitive information, particularly as AI services become more integrated into coding practices.
How This Affects Your Data
The implications of these leaks are profound. GitGuardian's report revealed that 1.5% of all commits leak secrets, but this figure jumps to 3.2% for AI-assisted commits. This means that developers using AI tools may inadvertently expose sensitive data, increasing the risk of data breaches and insider threats. In addition, 64% of secrets exposed in previous years remain active, indicating that many vulnerabilities are not being addressed.
Moreover, the report highlighted that AI-related secrets were five times more likely to be leaked than those tied to core services. The rise of AI in development processes raises questions about the security measures in place to protect sensitive information, especially given that about a third of internal repositories contained hardcoded secrets.
Who's Responsible
The responsibility for these leaks is multifaceted. While AI coding assistants are at the forefront, developers must also bear some accountability. The copying and pasting of plaintext credentials into collaboration tools like Slack and Jira contributed to 28% of internal incidents. This behavior not only increases the risk of accidental leaks but also highlights a lack of awareness regarding secure coding practices.
GitGuardian's findings emphasize the need for developers to prioritize security in their workflows. As AI tools become more prevalent, understanding their limitations and potential pitfalls is crucial for maintaining data integrity.
How to Protect Your Privacy
To mitigate the risks associated with AI coding assistants, GitGuardian recommends several best practices. Developers should scan code changes for secrets before committing them to repositories. It is also vital to treat internal repositories as potential leak sources rather than relying on obscurity for security.
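A pre-commit scan of this kind can be sketched in a few lines of Python. The two regexes below match well-known public token formats (AWS access key IDs and GitHub personal access tokens) and are purely illustrative; they are not GitGuardian's detection rules, and real scanners ship hundreds of patterns plus entropy checks:

```python
import re

# Illustrative patterns for two well-known credential formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

if __name__ == "__main__":
    # AWS's documented example key, used here as a stand-in for staged code.
    staged_diff = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n'
    for name, secret in find_secrets(staged_diff):
        print(f"blocked commit: {name} detected ({secret[:8]}...)")
```

Wired into a git pre-commit hook, a check like this can reject a commit before a hardcoded credential ever reaches the repository's history.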
Additionally, storing secrets in a secure, centralized vault and automating their rotation can significantly reduce exposure risks. GitGuardian advises prioritizing the rotation of leaked secrets based on overall risk rather than solely on their validity. By implementing these strategies, organizations can better safeguard their sensitive information against the rising tide of leaks.
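The vault approach boils down to never committing the secret value itself: code references a secret by name and resolves it at runtime, so rotation happens in the vault without touching the repository. A minimal sketch, assuming the vault or CI system injects secrets as environment variables (the variable name here is hypothetical):

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret injected at runtime by a vault agent or CI secret store.

    Hardcoding the value in source is what ends up in public commits;
    resolving it by name at runtime keeps it out of git entirely.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set; check vault configuration")
    return value

if __name__ == "__main__":
    # In practice the vault agent sets this before the process starts.
    os.environ["DB_PASSWORD"] = "example-only"
    print(get_secret("DB_PASSWORD"))
```

Because the code only ever names the secret, rotating a leaked credential is a vault-side operation and requires no code change or redeployment of history.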
SC Media