Researchers Uncover Simple Technique to Extract ChatGPT Training Data

Can getting ChatGPT to repeat the same word over and over again cause it to regurgitate large amounts of its training data, including personally identifiable information and other data scraped from the Web? The answer is an emphatic yes, according to a team of researchers at Google DeepMind, Cornell University, and four other universities who tested the hugely popular generative AI chatbot's susceptibility to leaking data when prompted in a specific way. 'Poem' as a Trigger Word In a report this week, the researchers described how they got ChatGPT to spew out memorized portions of its training data merely by prompting it to repeat words like "Poem," "Company," "Send," "Make," and "Part" forever. After a few hundred times, ChatGPT began generating "Often nonsensical" output, a small fraction of which included memorized training data such as an individual's email signature and personal contact information. The researchers discovered that some words were better at getting the generative AI model to spill memorized data than others. Prompting the chatbot to repeat the word "Company" caused it to emit training data 164 times more often than other words, such as "Know." Data that the researchers were able to extract from ChatGPT in this manner included personally identifiable information on dozens of individuals; explicit content; verbatim paragraphs from books and poems; and URLs, unique user identifiers, bitcoin addresses, and programming code. A Potentially Big Privacy Issue? "Using only $200 USD worth of queries to ChatGPT, we are able to extract over 10,000 unique verbatim memorized training examples," the researchers wrote in their paper titled "Scalable Extraction of Training Data from Language Models." "Our extrapolation to larger budgets suggests that dedicated adversaries could extract far more data," they wrote. The researchers estimated an adversary could extract 10 times more data with more queries. The tendency for such memorization increases with the size of the training data. Researchers have shown how such memorized data is often discoverable in a model's output. Other researchers have shown how adversaries can use so-called divergence attacks to extract training data from an LLM. A divergence attack is one in which an adversary uses intentionally crafted prompts or inputs to get an LLM to generate outputs that diverge significantly from what it would typically produce. In many of these studies, researchers have used open source models - where the training datasets and algorithms are known - to test the susceptibility of LLM to data memorization and leaks. The studies have also typically involved base AI models that have not been aligned to operate in a manner like an AI chatbot such as ChatGPT. A Divergence Attack on ChatGPT The latest study is an attempt to show how a divergence attack can work on a sophisticated closed, generative AI chatbot whose training data and algorithms remain mostly unknown. The study involved the researchers developing a way to get ChatGPT "To 'escape' out of its alignment training" and getting it to "Behave like a base language model, outputting text in a typical Internet-text style." The prompting strategy they discovered caused precisely such an outcome, resulting in the model spewing out memorized data. To verify that the data the model was generating was indeed training data, the researchers first built an auxiliary dataset containing some 9 terabytes of data from four of the largest LLM pre-training datasets - The Pile, RefinedWeb, RedPajama, and Dolma. They then compared the output data from ChatGPT against the auxiliary dataset and found numerous matches. The researchers figured they were likely underestimating the extent of data memorization in ChatGPT because they were comparing the outputs of their prompting only against the 9-terabyte auxiliary dataset. "Our paper suggests that training data can easily be extracted from the best language models of the past few years through simple techniques."

This Cyber News was published on www.darkreading.com. Publication date: Fri, 01 Dec 2023 14:00:24 +0000


Cyber News related to Researchers Uncover Simple Technique to Extract ChatGPT Training Data

Researchers Uncover Simple Technique to Extract ChatGPT Training Data - Can getting ChatGPT to repeat the same word over and over again cause it to regurgitate large amounts of its training data, including personally identifiable information and other data scraped from the Web? The answer is an emphatic yes, according to ...
7 months ago Darkreading.com
Google Researchers' Attack Prompts ChatGPT to Reveal Its Training Data - A team of researchers primarily from Google's DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on using a new type of attack prompt which asked a production model of the chatbot to repeat specific words forever. ...
7 months ago 404media.co
Google Researchers Find ChatGPT Queries Collect Personal Data - The LLMs are evolving rapidly with continuous advancements in their research and applications. Recently, cybersecurity researchers at Google discovered how threat actors can exploit ChatGPT queries to collect personal data. StorageGuard scans, ...
7 months ago Cybersecuritynews.com
XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT - With its widespread use among businesses and individual users, ChatGPT is a prime target for attackers looking to access sensitive information. In this blog post, I'll walk you through my discovery of two cross-site scripting vulnerabilities in ...
4 months ago Imperva.com
6 Best Cybersecurity Training for Employees in 2024 - Cybersecurity awareness training programs are comprehensive, long-term products that show your workforce how to spot security threats and potential attacks. Cybersecurity training products typically offer informational videos, quizzes, and phishing ...
5 months ago Esecurityplanet.com
How enterprises are using gen AI to protect against ChatGPT leaks - ChatGPT is the new DNA of shadow IT, exposing organizations to new risks no one anticipated. Enterprise workers are gaining a 40% performance boost thanks to ChatGPT based on a recent Harvard University study. A second study from MIT discovered that ...
5 months ago Venturebeat.com
How Are Security Professionals Managing the Good, The Bad and The Ugly of ChatGPT? - ChatGPT has emerged as a shining light in this regard. Already we're seeing the platform being integrated into corporate systems, supporting in areas such as customer success or technical support. The bad: The risks surrounding ChatGPT. Of course, ...
6 months ago Cyberdefensemagazine.com
ChatGPT Extensions Could be Exploited to Steal Data and Sensitive Information - API security professionals Salt Security have released new threat research from Salt Labs highlighting critical security flaws within ChatGPT plugins, presenting a new risk for enterprises. Plugins provide AI chatbots like ChatGPT access and ...
3 months ago Itsecurityguru.org
Cybersecurity Training for Small Businesses - The importance of cybersecurity training for small businesses cannot be overstated in today's increasingly digital world. In conclusion, cybersecurity training is essential for small businesses to protect themselves against cyber threats. There are ...
4 months ago Securityzap.com
Google DeepMind Researchers Uncover ChatGPT Vulnerabilities - Scientists at Google DeepMind, leading a research team, have adeptly utilized a cunning approach to uncover phone numbers and email addresses via OpenAI's ChatGPT, according to a report from 404 Media. This discovery prompts apprehensions regarding ...
6 months ago Cysecurity.news
How to Safeguard Your Data Through Security Awareness Training? - As cybercriminals employ increasingly advanced tactics, IT security awareness training becomes a pivotal defense mechanism. This article delves deeper into the imperative of such training and provides actionable tips to enhance the effectiveness of ...
2 months ago Cybersecurity-insiders.com
Mastering Cybersecurity: Developer Training - Discover how to create an effective and engaging training program for your developers. Create a security training program with clearly defined goals to influence your developers to prioritize learning. Developers are likelier to participate and exert ...
5 months ago Feeds.dzone.com
ChatGPT Spills Secrets in Novel PoC Attack - A team of researchers from Google DeepMind, Open AI, ETH Zurich, McGill University, and the University of Washington have developed a new attack for extracting key architectural information from proprietary large language models such as ChatGPT and ...
3 months ago Darkreading.com
ChatGPT Clone Apps Collecting Personal Data on iOS, Play Store - On Android devices, one of the apps analyzed by researchers has more than 100,000 downloads, tracks, and shares location data with ByteDance and Amazon, etc. ChatGPT, the AI software, has already taken the Internet by storm, and that is why ...
1 year ago Hackread.com
Are you sure you want to share that with ChatGPT? How Metomic helps stop data leaks - Open AI's ChatGPT is one of the most powerful tools to come along in a lifetime, set to revolutionize the way many of us work. Workers aren't content to wait until organizations work this question out, however: Many are already using ChatGPT and ...
4 months ago Venturebeat.com
Foreign states already using ChatGPT maliciously, UK IT leaders believe - Most UK IT leaders believe that foreign states are already using the ChatGPT chatbot for malicious purposes against other nations. That's according to a new study from BlackBerry, which surveyed 500 UK IT decision makers revealing that, while 60% of ...
1 year ago Csoonline.com
Google to Announce Chat-GPT Rival On February 8 Event - There seems to be a lot of consternation on Google's part at the prospect of a showdown with ChatGPT on the February 8 event. The search giant has been making moves that suggest it is preparing to enter the market for large language models, where ...
1 year ago Cybersecuritynews.com
Locking Down ChatGPT: A User's Guide to Strengthening Account Security - OpenAI officials said that the user who reported his ChatGPT history was a victim of a compromised ChatGPT account, which resulted in the unauthorized logins. OpenAI has confirmed that the unauthorized logins originate from Sri Lanka, according to an ...
5 months ago Cysecurity.news
The Emergence of AI In the Enterprise: Know the Security Risks - As is often the case with any new, emerging technology, using AI comes with security risks, and it's essential to understand them and impose the proper guardrails around them to protect company, customer, and employee data. There are real, tangible ...
6 months ago Cyberdefensemagazine.com
The Future of Phishing Email Training for Employees in Cybersecurity - One common method they use is through phishing emails. To counter this changing threat, companies must give importance to providing phishing email training for employees on identifying and responding properly to phishing attempts. Standard training ...
1 month ago Hackread.com
OpenAI rolls out imperfect fix for ChatGPT data leak flaw - OpenAI has mitigated a data exfiltration bug in ChatGPT that could potentially leak conversation details to an external URL. According to the researcher who discovered the flaw, the mitigation isn't perfect, so attackers can still exploit it under ...
6 months ago Bleepingcomputer.com
Cybersecurity Training for Business Leaders - This article explores the significance of cybersecurity training for business leaders and its crucial role in establishing a secure and resilient business environment. By examining the key components of effective training programs and the ...
5 months ago Securityzap.com
Hangzhou's Cybersecurity Breakthrough: How ChatGPT Elevated Ransomware Resolution - The Chinese media reported on Thursday that local police have arrested a criminal gang from Hangzhou who are using ChatGPT for program optimization to carry out ransomware attacks for the purpose of extortion. An organization in the Shangcheng ...
6 months ago Cysecurity.news
Understanding CAT Culture in Cybersecurity: Collaboration, Awareness, and Training - In the dynamic and ever-evolving landscape of cybersecurity, organizations are increasingly recognizing the importance of fostering a robust security culture to mitigate risks and safe-guard sensitive data. One such approach gaining traction is the ...
2 months ago Cybersecurity-insiders.com
Navigating an AI-Enhanced Landscape of Cybersecurity in 2024: A Proactive Approach to Phishing Training in Enterprises - As we stand at the precipice of 2024, the intersection of artificial intelligence and cybersecurity looms large, with phishing attacks emerging as a focal point of concern. The integration of AI is poised to redefine the threat landscape, introducing ...
6 months ago Securityboulevard.com

Latest Cyber News


Cyber Trends (last 7 days)


Trending Cyber News (last 7 days)