ChatGPT Spills Secrets in Novel PoC Attack

A team of researchers from Google DeepMind, OpenAI, ETH Zurich, McGill University, and the University of Washington has developed a new attack for extracting key architectural information from proprietary large language models such as ChatGPT and Google PaLM-2.
The research shows how adversaries can extract supposedly hidden data from an LLM-enabled chatbot and use it to duplicate or steal the model's functionality.
The attack, described in a technical report released this week, is one of several over the past year that have highlighted weaknesses that makers of AI tools still need to address in their technologies, even as adoption of their products soars.
Extracting Hidden Data

As the researchers behind the new attack note, little is publicly known about how large language models such as GPT-4, Gemini, and Claude 2 work.
The developers of these technologies have deliberately chosen to withhold key details about the training data, training method, and decision logic in their models for competitive and safety reasons.
Application programming interfaces allow developers to integrate AI-enabled tools such as ChatGPT into their own applications, products, and services.
The APIs let developers harness AI models such as GPT-4, GPT-3, and PaLM-2 for several use cases, such as building virtual assistants and chatbots, automating business process workflows, generating content, and responding to domain-specific queries.
The researchers' goal was to see what they could extract by running targeted queries against the final layer of the neural network architecture, the layer responsible for generating output predictions based on input data.
A Top-Down Attack

The information in this layer can include important clues about how the model handles input data, transforms it, and runs it through a complex series of processes to generate a response.
The researchers found that by attacking the final layer of many LLMs, they were able to extract substantial proprietary information about the models.
Even so, the researchers described their attack as recovering only a relatively small part of the targeted AI models.
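The intuition behind a final-layer attack of this kind can be illustrated with simple linear algebra: if a model's last layer projects a hidden state of dimension h up to a much larger vocabulary of size v, then every logit vector the API returns lies in an h-dimensional subspace, so stacking enough responses and checking their numerical rank reveals the hidden dimension. The following is a minimal sketch of that idea, with a simulated model standing in for real API queries; all names, dimensions, and the `query_logits` helper are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

# Hypothetical setup: a "black-box" model whose secret final layer projects
# a hidden state of size h up to a vocabulary of size v (h << v).
rng = np.random.default_rng(0)
hidden_dim, vocab_size = 64, 1000
W = rng.standard_normal((vocab_size, hidden_dim))  # secret projection matrix

def query_logits(prompt_id: int) -> np.ndarray:
    """Stand-in for an API call: returns the full logit vector for one prompt."""
    hidden_state = rng.standard_normal(hidden_dim)  # varies with the prompt
    return W @ hidden_state

# Collect logit vectors for more prompts than the (unknown) hidden dimension.
n_queries = 200
L = np.stack([query_logits(i) for i in range(n_queries)])  # shape (200, 1000)

# Every row of L lies in the h-dimensional column space of W, so the
# numerical rank of the stacked matrix reveals the hidden dimension.
singular_values = np.linalg.svd(L, compute_uv=False)
recovered_dim = int((singular_values > 1e-6 * singular_values[0]).sum())
print(recovered_dim)  # recovers 64, the secret hidden dimension
```

Against a real product the attacker sees only API outputs, not `W`, which is what makes the result notable: architectural parameters the vendor never published fall out of ordinary query responses.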
Over the past year there have been numerous other reports that have highlighted weaknesses in popular GenAI models.
Earlier this month, for instance, researchers at HiddenLayer released a report describing how they were able to get Google's Gemini technology to misbehave in various ways by sending it carefully structured prompts.
Others have found similar approaches for jailbreaking ChatGPT and getting it to generate content it is not supposed to produce.
In December, researchers from Google DeepMind and elsewhere showed how they could extract ChatGPT's hidden training data simply by prompting it to repeat certain words incessantly.


This Cyber News was published on www.darkreading.com. Publication date: Wed, 13 Mar 2024 22:15:15 +0000
