GitGuardian found that a large amount of sensitive information, such as passwords and API keys, is being accidentally exposed on GitHub, driven in part by AI tools that help developers write code. The company is now offering a new way to catch these leaks before they happen.
What Happened
In a startling report, GitGuardian revealed that 2025 saw an 81% surge in leaks of AI service credentials, while nearly 29 million secrets of all kinds were exposed in public GitHub repositories. The increase is attributed to the rapid adoption of AI in software development, which has outpaced organizations' ability to manage and secure sensitive information. The report, the fifth edition of GitGuardian's "State of Secrets Sprawl," finds that AI-assisted coding leaks secrets at an alarmingly high rate, averaging 3.2% of commits against a baseline of 1.5%.
The report highlights a significant change in the software landscape. The number of public commits has increased by 43% year-over-year, and the rate of secret leaks is growing even faster than the developer population. This means that while more developers are contributing to projects, the risk of exposing sensitive information is escalating at an unprecedented rate.
Moreover, the report reveals that 28,649,024 new secrets were exposed in public GitHub commits, marking a 34% year-over-year increase and the largest annual jump in the report's history. This alarming trend is primarily driven by the design of authentication systems, which are failing to keep pace with the rapid creation of credentials needed for AI integrations.
New Risks from AI Coding Assistants
The rise of AI coding assistants such as Cursor, Claude Code, and GitHub Copilot has introduced new vulnerabilities. These tools can read files, run shell commands, and interact with external systems, increasing the chances of sensitive data exposure before code even reaches a repository. Developers might inadvertently expose API keys or other credentials during debugging or through AI prompts. GitGuardian's findings indicate that this exposure often happens in real-time and outside traditional security controls, creating a significant blind spot for organizations.
To combat these risks, GitGuardian is enhancing its ggshield tool with hook-based secret scanning specifically designed for AI coding environments. This new feature scans for secrets at critical points in the workflow: before prompt submission, before tool execution, and after tool usage, providing organizations with preventive controls to mitigate risks.
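To make the hook model concrete, below is a minimal sketch of how such a pre-tool-use check could be wired up, assuming an assistant that passes event data as JSON on stdin and blocks the action on a non-zero exit code (the convention Claude Code hooks use). The JSON field names and hook wiring are illustrative, not GitGuardian's actual implementation, though `ggshield secret scan path` is a real ggshield command.

```python
#!/usr/bin/env python3
"""Illustrative pre-tool-use hook: scan AI tool input for secrets
before the assistant runs a command or submits a prompt.

Assumptions (not GitGuardian's actual integration): the coding
assistant passes event data as JSON on stdin and treats a non-zero
exit code as "block this action".
"""
import json
import os
import subprocess
import sys
import tempfile

def main() -> int:
    # Hypothetical event shape, e.g. {"tool_input": {"command": "curl -H 'Authorization: ...'"}}
    event = json.load(sys.stdin)
    payload = json.dumps(event.get("tool_input", {}))

    # Write the content to a temp file so ggshield can scan it.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(payload)
        path = f.name

    try:
        # `ggshield secret scan path` exits non-zero when it finds secrets.
        result = subprocess.run(
            ["ggshield", "secret", "scan", "path", path],
            capture_output=True, text=True,
        )
    finally:
        os.unlink(path)

    if result.returncode != 0:
        print("Blocked: potential secret detected in tool input", file=sys.stderr)
        return 2  # signal the assistant to block the action
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same pattern extends to the other two checkpoints GitGuardian describes: scanning the prompt before submission and scanning tool output after execution.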
Who's Affected
The implications of these findings are extensive, affecting organizations across every sector that uses AI in its software development processes. Developers, especially those using AI tools like Claude Code, are at higher risk of unintentionally leaking sensitive information. The report emphasizes that internal repositories are particularly vulnerable: they are six times more likely to contain hardcoded secrets than public ones. It also finds that 28% of incidents stem from leaks in collaboration tools, showing that sensitive information is not confined to code repositories. This broad exposure raises concerns for security teams, who must now contend with a more complex threat landscape.
What Data Was Exposed
GitGuardian's findings show that 1,275,105 AI service credentials were leaked, reflecting the sharp rise in exposed AI secrets noted above. These leaks are particularly concerning because they often slip through security measures designed for traditional workflows. The report also highlights that long-lived secrets dominate the landscape, with 60% of policy violations involving credentials that persist over time.
The report indicates that credential governance is lagging behind the rapid pace of AI development, with AI-assisted commits leaking secrets at roughly double the baseline rate. This gap is primarily due to the convenience of creating credentials during rapid development cycles, often without proper scoping or management.
Moreover, the report indicates that remediation efforts are failing at scale, with 64% of valid secrets from 2022 still unrevoked in 2026. This lack of effective governance and remediation strategies poses a serious risk to organizations relying on AI technologies.
What You Should Do
Organizations must adapt their security strategies to address the growing risks associated with AI service leaks. This includes implementing stronger governance for non-human identities (NHIs) and enhancing training for developers using AI tools. Security teams should prioritize identifying and managing exposed secrets, ensuring that sensitive information is not hardcoded in repositories or configuration files.
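As a minimal illustration of the "no hardcoded secrets" guidance, credentials can be resolved at runtime from the environment (populated by a secrets manager or CI vault) rather than written into code or config files; the variable name below is hypothetical.

```python
import os

def get_api_key(name: str = "PAYMENTS_API_KEY") -> str:
    """Resolve a credential at runtime instead of hardcoding it.

    The environment variable name is illustrative; in practice the
    value would be injected by a secrets manager or CI vault, never
    committed to the repository or a config file.
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; refusing to fall back to a hardcoded default")
    return key
```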
Investing in tools that automate the detection and remediation of leaked secrets is crucial. GitGuardian's report suggests that organizations treat NHIs as first-class assets, with dedicated governance and context integrated into their security programs. Furthermore, organizations should adopt authentication models that eliminate static credential storage altogether, replacing long-lived secrets with short-lived credentials.
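As one concrete sketch of that short-lived-credential model, AWS STS can exchange an identity for temporary credentials that expire on their own. `assume_role` is a real boto3 call; the role ARN and session name below are placeholders.

```python
import boto3

def short_lived_session(role_arn: str = "arn:aws:iam::123456789012:role/ci-deploy"):
    """Exchange a long-lived identity for temporary credentials.

    The role ARN is a placeholder. DurationSeconds=900 is the minimum
    STS allows, so any leaked credentials expire within 15 minutes.
    """
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="ephemeral-ci",
        DurationSeconds=900,
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

Because the credentials expire within minutes, a token that does slip into a prompt, log, or commit has a sharply bounded blast radius.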
By doing so, they can better protect against the rising tide of AI-related leaks and secure their sensitive information. The need for a proactive approach to credential management has never been more critical as the velocity of AI development continues to outpace existing governance frameworks.
As AI coding tools become integral to development, organizations must enhance their security frameworks to address the unique risks posed by these technologies. Implementing real-time secret scanning can significantly mitigate the threat of credential leaks.
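To show what real-time scanning looks like at its simplest, the toy pre-commit check below searches staged changes for a few well-known token formats; the patterns are illustrative and far narrower than what a production scanner such as ggshield detects.

```python
import re
import subprocess
import sys

# Illustrative patterns only; production scanners use hundreds of
# validated detectors rather than a handful of regexes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic "sk-" style API key
]

def staged_diff() -> str:
    """Return the staged changes about to be committed."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout

def main() -> int:
    diff = staged_diff()
    for pattern in SECRET_PATTERNS:
        if pattern.search(diff):
            print(f"Possible secret matching {pattern.pattern!r}; commit blocked.", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```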