HIGH · Vulnerabilities

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex Vulnerability

The Hacker News
OpenAI · ChatGPT · Codex · GitHub · DNS
🎯 Basically, a flaw in ChatGPT let bad actors steal user data without anyone knowing.

Quick Summary

OpenAI has patched a critical vulnerability in ChatGPT that allowed data exfiltration without user consent. This flaw posed serious risks to user privacy and security. Organizations must enhance their security measures to protect sensitive information in AI environments.

The Flaw

A serious vulnerability was discovered in OpenAI's ChatGPT, which permitted the unauthorized exfiltration of sensitive user data. According to Check Point, a single malicious prompt could exploit this flaw, turning a normal conversation into a covert data leak. This vulnerability bypassed existing safeguards, allowing attackers to extract user messages, uploaded files, and other sensitive content without the user's knowledge.

The issue stemmed from a hidden communication channel in the Linux runtime used by the AI. This side channel utilized DNS requests to transmit data, effectively circumventing the AI's built-in guardrails. In essence, the AI did not recognize this behavior as an external data transfer, creating a security blind spot.
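DNS tunnelling of this kind can often be caught on the network side rather than inside the model. As a rough defensive illustration (not the actual exploit or OpenAI's fix), the sketch below flags query names whose labels are unusually long and high in entropy, which is typical of encoded payloads; the function names and thresholds here are invented for the example.

```python
import math

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    if not label:
        return 0.0
    n = len(label)
    counts = {c: label.count(c) for c in set(label)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def looks_like_exfil(qname: str, max_label: int = 30, entropy_cutoff: float = 3.5) -> bool:
    """Heuristic check: data smuggled into a hostname tends to produce
    long, high-entropy labels, unlike human-chosen names."""
    return any(
        len(label) > max_label and label_entropy(label) > entropy_cutoff
        for label in qname.rstrip(".").split(".")
    )

# A base32-looking payload label vs. an ordinary hostname:
print(looks_like_exfil("mzxw6ytboi2gs3thmvzsaylomqqgc3tenfxgo43f.attacker.example"))  # True
print(looks_like_exfil("www.openai.com"))  # False
```

In practice a heuristic like this would sit in a resolver log pipeline or egress proxy, and the thresholds would need tuning against real traffic to keep false positives manageable.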

What's at Risk

The implications of this vulnerability are significant. Users of ChatGPT, especially in enterprise settings, may unknowingly expose sensitive information. This includes personal messages and confidential files, which could lead to identity theft or corporate espionage. The risk is magnified when malicious prompts are embedded in custom GPTs, making it easier for attackers to deploy this technique without direct user interaction.

The potential for exploitation raises alarms about the security of AI tools. As these systems become more integrated into daily workflows, the stakes for data privacy and protection grow higher. Organizations must recognize that relying solely on the AI's native security features is insufficient.

Patch Status

OpenAI responded promptly to this discovery. The vulnerability was patched on February 20, 2026, following responsible disclosure from Check Point. While there is no evidence that the flaw was exploited in the wild, the existence of such a vulnerability highlights the need for continuous vigilance in security practices.

In addition to the ChatGPT flaw, a separate command injection vulnerability was found in OpenAI's Codex, which could have led to GitHub token compromises. This further emphasizes the need for robust security measures across all AI platforms.
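The article does not describe the Codex bug in detail, but command injection in this kind of tooling typically arises when untrusted input (for example, a repository URL an agent is asked to clone) is spliced into a shell string. A generic sketch under that assumption; the function names and the malicious value are hypothetical:

```python
import shlex

def shell_command_unsafe(repo_url: str) -> str:
    # VULNERABLE: the URL is interpolated into a shell string, so a value
    # like "x; cat ~/.git-credentials" appends a second command.
    return f"git clone {repo_url}"

def shell_command_quoted(repo_url: str) -> str:
    # Safer: shlex.quote keeps the whole value a single shell word.
    # (Better still: pass an argument list to subprocess.run with shell=False.)
    return f"git clone {shlex.quote(repo_url)}"

malicious = "https://example.com/r.git; cat ~/.git-credentials"
print(shell_command_unsafe(malicious))   # the "; cat ..." part would execute
print(shell_command_quoted(malicious))   # the URL stays one quoted argument
```

Avoiding string-built shell commands entirely is the usual fix for this class of bug, which is why agentic tools that shell out on a model's behalf deserve particular scrutiny.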

Immediate Actions

Organizations using AI tools like ChatGPT must take proactive steps to safeguard their data. Implementing additional security layers can help mitigate risks associated with prompt injections and other unexpected behaviors. Independent visibility and layered protection should be prioritized to ensure that sensitive data remains secure.
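One such independent layer is egress filtering: even if a prompt injection slips past the model's guardrails, DNS lookups to unapproved domains simply fail. A minimal deny-by-default sketch of a domain allowlist check (the helper name and the allowlist contents are invented for illustration):

```python
def dns_query_allowed(qname: str, allowlist: set[str]) -> bool:
    """Permit a DNS query only if the name is an allowlisted domain or a
    subdomain of one; everything else is blocked by default."""
    name = qname.rstrip(".").lower()
    return any(name == d or name.endswith("." + d) for d in allowlist)

ALLOWED = {"openai.com", "github.com"}
print(dns_query_allowed("api.openai.com", ALLOWED))              # True
print(dns_query_allowed("payload123.attacker.example", ALLOWED)) # False
```

Matching on the trailing "." plus domain (rather than a bare substring) prevents lookalikes such as "notopenai.com" from slipping through.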

As Eli Smadja from Check Point Research noted, the evolution of AI platforms necessitates a rethinking of security architecture. Companies should not assume that AI tools are secure by default. Instead, they should actively work to strengthen their defenses against potential vulnerabilities and threats.

In conclusion, as AI technology continues to advance, so too must our approach to security. The recent vulnerabilities in OpenAI's systems serve as a reminder that vigilance and proactive measures are essential in protecting sensitive information in an increasingly digital world.

🔒 Pro insight: The exploitation of hidden communication channels in AI systems highlights the urgent need for enhanced security protocols in AI development.

Original article from The Hacker News

Related Pings

HIGH · Vulnerabilities

Fortinet BIG-IP Vulnerability - Reclassified as RCE Threat

A flaw in Fortinet's BIG-IP software has been reclassified as a remote code execution threat. This raises the stakes for organizations using this software, as attackers could gain control of their systems. Immediate action is needed to protect against potential exploitation.

Dark Reading
HIGH · Vulnerabilities

OpenAI Patches ChatGPT Flaw Allowing Data Smuggling via DNS

OpenAI has patched a vulnerability in ChatGPT that allowed data to be smuggled through DNS. This flaw posed risks for sensitive data in regulated industries. Organizations must ensure their AI systems are secure to prevent potential breaches.

The Register Security
CRITICAL · Vulnerabilities

Citrix NetScaler - Critical Memory Flaw Under Attack

A critical vulnerability in Citrix NetScaler is being actively exploited, risking sensitive data exposure. Administrators must act quickly to secure their systems against this threat.

BleepingComputer
HIGH · Vulnerabilities

Citrix NetScaler Vulnerability Added to CISA's Catalog

CISA has added a new vulnerability to its KEV Catalog. Known as CVE-2026-3055, this flaw affects Citrix NetScaler. It's crucial for organizations to address this risk promptly.

CISA Advisories
HIGH · Vulnerabilities

Smart Slider Plugin Vulnerability - Widespread Compromise Possible

A serious flaw in the Smart Slider 3 plugin threatens over 500,000 WordPress sites. This vulnerability could allow attackers to access sensitive data and compromise site security. Website owners must act quickly to protect their sites from potential exploitation.

SC Media
HIGH · Vulnerabilities

Exposed API Keys - Major Services at Risk Revealed

A recent report reveals nearly 2,000 API keys for major services like AWS and GitHub were found exposed online. This puts countless users at risk. Organizations must act quickly to secure their credentials and protect sensitive data.

SC Media