OpenAI has addressed a security vulnerability in ChatGPT that could have exposed users' Gmail data. The flaw, identified by cybersecurity firm Radware, potentially allowed attackers to extract sensitive information. The vulnerability stemmed from the integration of Model Context Protocol (MCP) tools, which enable ChatGPT to connect to services like Gmail, Google Calendar, and SharePoint.
An attacker could exploit this by sending a calendar invite with a malicious 'jailbreak' prompt to a user. If the user then asked ChatGPT to review their calendar, the AI would read the invite and follow the attacker's instructions, potentially leaking private emails. The exploit could allow the AI to search a user's private emails and send the data to the attacker.
While MCP tools were initially available only in developer mode with manual approval, the risk existed that users might inadvertently approve malicious requests. The prompt would trick ChatGPT into divulging sensitive email data. OpenAI has since patched the vulnerability, preventing potential data breaches.
Subscribe for Weekly Updates
Stay ahead with our weekly AI and tech briefings, delivered every Tuesday.




