OpenAI’s ChatGPT Operator, a cutting-edge research preview tool available to ChatGPT Pro users, has recently come under scrutiny for vulnerabilities that could expose sensitive personal data through prompt injection exploits. The demonstrated attack chain unfolds in three stages:

Hijacking Operator via Prompt Injection: Malicious instructions are hosted on platforms such as GitHub issues or embedded in website text, where Operator encounters and follows them while browsing.

Navigating to Sensitive Pages: The injected instructions trick Operator into accessing authenticated pages containing personally identifiable information (PII), such as email addresses or phone numbers. In one demonstration, Operator was manipulated into extracting a private email address from a user’s YC Hacker News account.

Leaking Data via Third-Party Websites: Operator is then directed to copy this information and paste it into an input field on a malicious web page, which captures the data without requiring a form submission.

OpenAI has deployed several mitigations against such attacks:

Inline Confirmation Requests: For certain actions, Operator requests user confirmation within the chat interface before proceeding.

Out-of-Band Confirmation Requests: When crossing website boundaries or executing complex actions, Operator displays intrusive confirmation dialogues explaining the potential risks.

Additionally, websites could adopt their own measures to block AI agents from accessing sensitive pages by identifying them through their unique User-Agent headers. A separate concern is that, since Operator sessions run server-side, OpenAI itself potentially has access to session cookies, authorization tokens, and other sensitive data.

Despite these measures, prompt injection attacks remain partially effective due to their probabilistic nature: both the attacks and the defenses depend on specific conditions being met.

Kaaviya is a Security Editor and fellow reporter with Cyber Security News.
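The User-Agent-based blocking mentioned above could be sketched as a simple server-side check. This is a minimal illustration, not OpenAI's or any site's actual implementation: the token list is an assumption (GPTBot is OpenAI's documented crawler token; the "Operator" token here is hypothetical, and the real header value should be taken from vendor documentation). Note also that User-Agent strings are client-supplied and trivially spoofable, so this is a courtesy filter rather than a security boundary.

```python
# Sketch: deny-listing AI agents on sensitive pages by User-Agent substring.
# Token list is illustrative; "Operator" is a hypothetical token, and
# real deployments should use vendor-documented User-Agent values.
AI_AGENT_TOKENS = ("GPTBot", "Operator")


def is_ai_agent(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known AI-agent token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_AGENT_TOKENS)


def gate_sensitive_request(headers: dict) -> int:
    """Return an HTTP status code: 403 for identified AI agents, 200 otherwise."""
    if is_ai_agent(headers.get("User-Agent", "")):
        return 403  # refuse to serve the sensitive page to the agent
    return 200
```

In practice this check would run in web-server or framework middleware, applied only to routes serving account settings, profile pages, and similar PII-bearing content.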
This Cyber News was published on cybersecuritynews.com. Publication date: Tue, 18 Feb 2025 06:40:06 +0000