LangChain Vulnerabilities - Exposing Files and Secrets
In short: newly disclosed flaws in two widely used AI frameworks could let attackers steal sensitive files and secrets.
Three critical vulnerabilities in LangChain and LangGraph could expose sensitive files and secrets. Millions of users are affected, and immediate patching is crucial to mitigate risks.
The Flaws
Cybersecurity researchers have uncovered three significant vulnerabilities affecting LangChain and LangGraph, two widely used frameworks in the AI development community. These flaws, if exploited, could lead to unauthorized access to sensitive data, including filesystem files, environment secrets, and conversation histories. LangChain, with over 52 million downloads last week alone, serves as the backbone for many applications powered by Large Language Models (LLMs). Meanwhile, LangGraph builds on this foundation, allowing for more complex workflows.
The vulnerabilities are as follows:
- CVE-2026-34070: A path traversal vulnerability that allows attackers to access arbitrary files through a specially crafted prompt.
- CVE-2025-68664: A deserialization flaw that can leak API keys and environment secrets.
- CVE-2025-67644: An SQL injection vulnerability that enables manipulation of SQL queries in LangGraph.
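To illustrate the path traversal class behind CVE-2026-34070, the standard defense is to resolve any model- or user-supplied path and confirm it remains inside an allowed base directory before reading. The sketch below is a generic mitigation pattern, not LangChain's actual fix; the `safe_read` helper and `BASE_DIR` location are hypothetical names chosen for this example.

```python
from pathlib import Path

# Hypothetical allowed root; a real application would point this at its data dir.
BASE_DIR = Path("/srv/app/data").resolve()

def safe_read(user_path: str) -> str:
    """Read a file only if it resolves inside BASE_DIR.

    Rejects traversal sequences such as '../../etc/passwd' that a
    crafted prompt might smuggle into a file-reading tool.
    """
    candidate = (BASE_DIR / user_path).resolve()
    if not candidate.is_relative_to(BASE_DIR):
        raise PermissionError(f"path escapes allowed directory: {user_path}")
    return candidate.read_text()
```

Resolving before checking is the key step: a naive string prefix check on the raw input misses `..` segments and symlinks, whereas `Path.resolve()` normalizes both.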
What's at Risk
The potential impact of these vulnerabilities is substantial. Successful exploitation could allow attackers to siphon off sensitive files such as Docker configurations and access critical conversation histories tied to sensitive workflows. Each vulnerability presents a unique attack vector, making it easier for malicious actors to target enterprises that rely on these frameworks. Given the interconnected nature of software dependencies, a flaw in LangChain can ripple through countless applications and libraries that utilize its code.
Researchers have noted that the vulnerabilities highlight a concerning trend: AI frameworks are not immune to classic security issues. This is particularly alarming given the rapid pace at which threat actors exploit newly disclosed vulnerabilities.
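The SQL injection entry (CVE-2025-67644) is a textbook example of such a classic issue, and the textbook remediation applies: bind user-controlled values as parameters rather than interpolating them into SQL text. The following is a generic `sqlite3` sketch of that pattern, not the actual langgraph-checkpoint-sqlite patch; the table layout and `load_state` helper are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'ok')")

def load_state(thread_id: str) -> list:
    # Vulnerable pattern: f"... WHERE thread_id = '{thread_id}'" would let an
    # input like "t1' OR '1'='1" rewrite the query and return every row.
    # Safe pattern: the driver binds the value as data, never as SQL text.
    cur = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    )
    return cur.fetchall()
```

With the parameterized form, an injection payload is treated as a literal (and unmatched) thread ID instead of executable SQL.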
Patch Status
Fortunately, patches have been released to address these vulnerabilities. Users are urged to update their installations immediately to mitigate risks. The patch details are as follows:
- CVE-2026-34070: Requires updating to langchain-core version 1.2.22 or higher.
- CVE-2025-68664: Users should upgrade to langchain-core version 0.3.81 or 1.2.5, depending on the release line in use.
- CVE-2025-67644: The fix is available in langgraph-checkpoint-sqlite version 3.0.1.
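One quick way to confirm an environment is patched is to compare installed package versions against the minimums above. The checklist below is an illustrative audit sketch, not an official tool; it uses a naive numeric version parse and covers the single-minimum advisories (CVE-2025-68664's dual 0.3.x/1.x fix lines would need per-line handling).

```python
from importlib import metadata

# Patched minimums from the advisory; illustrative checklist only.
PATCHED = {
    "langchain-core": (1, 2, 22),              # CVE-2026-34070 (1.x line)
    "langgraph-checkpoint-sqlite": (3, 0, 1),  # CVE-2025-67644
}

def parse(version: str) -> tuple:
    """Naive numeric parse; good enough for plain X.Y.Z releases."""
    return tuple(int(p) for p in version.split(".")[:3])

def audit() -> dict:
    """Return {package: True if the installed version meets the patched minimum}."""
    results = {}
    for pkg, minimum in PATCHED.items():
        try:
            results[pkg] = parse(metadata.version(pkg)) >= minimum
        except (metadata.PackageNotFoundError, ValueError):
            # Missing package (nothing to patch) or non-standard version
            # string (skip rather than guess).
            results[pkg] = True
    return results
```

Any `False` entry in the result means the corresponding package should be upgraded before further use.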
Immediate Actions
For developers and organizations using LangChain or LangGraph, immediate action is crucial. Here are some steps to take:
- Update your frameworks: Ensure you are using the latest patched versions.
- Review your code: Audit the places where model output or user input reaches file paths, deserialization routines, or SQL queries — the three attack surfaces these CVEs cover.
- Monitor for unusual activity: Keep an eye on your systems for any signs of unauthorized access.
As the cybersecurity landscape evolves, it is essential to remain vigilant. The interconnected nature of AI frameworks means that a single vulnerability can have far-reaching consequences. By staying informed and proactive, developers can help safeguard their applications and the sensitive data they handle.
The Hacker News