OpenAI plugs ShadowLeak bug in ChatGPT which allowed anybody access to everybodys gmail emails and any other integrations

ChatGPT’s research assistant sprung a leak – since patched – that let attackers steal Gmail secrets with just a single carefully crafted email.

Deep Research, a tool unveiled by OpenAI in February, enables users to ask ChatGPT to browse the internet or their personal email inbox and generate a detailed report on its findings. The tool can be integrated with apps like Gmail and GitHub, allowing people to do deep dives into their own documents and messages without ever leaving the chat window.

Cybersecurity outfit Radware this week disclosed a critical flaw in the feature, dubbed “ShadowLeak,” warning that it could allow attackers to siphon data from inboxes with no user interaction whatsoever. Researchers showed that simply sending a maliciously crafted email to a Deep Research user was enough to get the agent to exfiltrate sensitive data when it later summarized that inbox.

The attack relies on hiding instructions inside the HTML of an email using white-on-white text, CSS tricks, or metadata, which a human recipient would never notice. When Deep Research later crawls the mailbox, it dutifully follows the attacker’s hidden orders and sends the contents of messages, or other requested data, to a server controlled by the attacker.

Radware stressed that this isn’t just a prompt injection on the user’s machine. The malicious request is executed from OpenAI’s own infrastructure, making it effectively invisible to corporate security tooling.

That server-side element is what makes ShadowLeak particularly nasty. There’s no dodgy link for a user to click, and no suspicious outbound connection from the victim’s laptop. The entire operation happens in the cloud, and the only trace is a benign-looking query from the user to ChatGPT asking it to “summarize today’s emails”. […] The researchers argue that the risk isn’t limited to Gmail either. Any integration that lets ChatGPT hoover up private documents could be vulnerable to the same trick if input sanitization isn’t watertight.

[…]

Radware said it reported the ShadowLeak bug to OpenAI on June 18 and the company released a fix on September 3. The Register asked OpenAI what specific changes were made to mitigate this vulnerability and whether it had seen any evidence that the vulnerability had been exploited in the wild before disclosure, but did not receive a response.

Radware is urging organizations to treat AI agents as privileged users and to lock down what they can access. HTML sanitization, stricter control over which tools agents can use, and better logging of every action taken in the cloud are all on its list of recommendations. ®

Source: OpenAI plugs ShadowLeak bug in ChatGPT • The Register

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

 robin@edgarbv.com  https://www.edgarbv.com