ChatGPT Extensions Could be Exploited to Steal Data and Sensitive Information

API security firm Salt Security has released new threat research from its research arm, Salt Labs, highlighting critical security flaws within ChatGPT plugins that present a new risk for enterprises.
Plugins give AI chatbots such as ChatGPT the access and permissions needed to perform tasks on behalf of users within third-party websites.
These flaws introduce a new attack vector that could enable bad actors to gain control of accounts on third-party websites and to access personally identifiable information (PII) and other sensitive user data stored within third-party applications.
ChatGPT plugins extend the model's abilities, allowing the chatbot to interact with external services.
The integration of these third-party plugins significantly enhances ChatGPT's applicability across various domains, from software development and data management to education and business.
When organisations leverage such plugins, they effectively grant ChatGPT permission to send sensitive organisational data to a third-party website and to access private external accounts.
Notably, in November 2023, OpenAI introduced a new ChatGPT feature, GPTs, a concept similar to plugins.
The Salt Labs team uncovered three different types of vulnerabilities within ChatGPT plugins.
The first of which was noted within ChatGPT itself when users install new plugins.
During this process, ChatGPT redirects a user to the plugin website to receive a code to be approved by that individual.
When ChatGPT receives the approved code from a user, it automatically installs the plugin and can interact with that plugin on behalf of the user.
Salt Labs researchers discovered that an attacker could exploit this function by instead delivering the victim a code approval for a malicious plugin of the attacker's own, causing ChatGPT to install that plugin, with the attacker's credentials, on the victim's account.
Any message that the user writes in ChatGPT may be forwarded to a plugin, meaning an attacker would have access to a host of proprietary information.
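A minimal sketch of why this flow is dangerous: the callback that completes the installation trusts whatever authorization code arrives, with no check that the code was issued to the user completing the flow. All names below are hypothetical and illustrate the class of bug, not OpenAI's actual implementation.

```python
# Hypothetical simulation of a plugin-install callback that trusts any
# OAuth code presented to it, regardless of who the code was issued to.
# All identifiers are illustrative, not OpenAI's real API.

# Codes issued by the plugin's OAuth server: code -> plugin account it belongs to
ISSUED_CODES = {
    "code-for-victim": "victim-plugin-account",
    "code-for-attacker": "attacker-plugin-account",
}

# ChatGPT-side state: which plugin account each ChatGPT user is linked to
installed = {}

def oauth_callback(chatgpt_user: str, code: str) -> str:
    """Completes installation for whoever presents the code."""
    account = ISSUED_CODES[code]       # flaw: no binding of code to user
    installed[chatgpt_user] = account  # link is created unconditionally
    return account

# The attacker sends the victim a crafted callback link carrying the
# attacker's own code; when the victim follows it, the attacker's plugin
# account is silently linked to the victim's ChatGPT conversations.
oauth_callback("victim", "code-for-attacker")
print(installed["victim"])  # attacker-plugin-account
```

From that point on, messages the victim writes are forwarded to a plugin the attacker controls.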
The second vulnerability was discovered within PluginLab, a framework that developers and companies use to build plugins for ChatGPT. During installation, Salt Labs researchers found that PluginLab did not properly authenticate user accounts. A prospective attacker could insert another user's ID and obtain a code that represents the victim, leading to account takeover on the plugin.
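A hedged sketch of this class of broken authentication: an endpoint mints a login code for whatever member ID appears in the request, without checking that the caller actually is that member. Endpoint, parameter, and ID names are invented for illustration.

```python
# Illustrative broken-authentication sketch; all names are invented and
# do not reflect PluginLab's real API.

MEMBERS = {"member-123": "victim@example.com", "member-999": "attacker@example.com"}
CODES = {}  # minted code -> member it represents

def request_code(authenticated_as: str, member_id: str) -> str:
    """Mints a login code for `member_id`.

    Flaw: the code is minted for whichever member_id the request names;
    `authenticated_as` (who is actually making the call) is never
    compared against it.
    """
    code = f"code-for-{member_id}"
    CODES[code] = member_id
    return code

# The attacker, logged in as member-999, simply names the victim's ID:
stolen = request_code(authenticated_as="member-999", member_id="member-123")
print(CODES[stolen])  # member-123 -- the code now represents the victim
```

The fix is equally simple in principle: reject any request where the authenticated caller does not match the member the code is being minted for.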
The third and final vulnerability, uncovered within several plugins, was OAuth redirection manipulation. As with the PluginLab flaw, it results in account takeover on the ChatGPT plugin itself.
To exploit it, an attacker sends a crafted link to the victim. Because several plugins do not validate the redirect URLs in their OAuth flows, an attacker can insert a malicious URL and steal user credentials. The attacker would then hold the victim's credentials and could take over their account in the same way.
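The standard defence against redirect manipulation is for the authorization endpoint to honour only redirect URLs that exactly match a pre-registered allowlist. A minimal sketch, with a generic hypothetical plugin domain rather than any specific vendor's:

```python
from urllib.parse import urlparse

# Pre-registered redirect URLs for this OAuth client (exact-match allowlist).
ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered HTTPS redirect targets.

    Prefix or substring checks are not enough: an attacker can register
    a lookalike host such as plugin.example.com.evil.net to defeat them,
    so membership is tested against the full, exact URL.
    """
    parsed = urlparse(redirect_uri)
    return parsed.scheme == "https" and redirect_uri in ALLOWED_REDIRECTS

print(is_safe_redirect("https://plugin.example.com/oauth/callback"))  # True
print(is_safe_redirect("https://evil.example.net/steal?c="))          # False
```

With exact matching in place, a crafted link pointing the OAuth response at an attacker-controlled URL is rejected before any credential or code is sent.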
Upon discovering the vulnerabilities, Salt Labs' researchers followed coordinated disclosure practices with OpenAI and third-party vendors, and all issues were remediated quickly, with no evidence that these flaws had been exploited in the wild.


This Cyber News was published on www.itsecurityguru.org. Publication date: Wed, 13 Mar 2024 16:43:06 +0000

