API security company Salt Security has released new threat research from Salt Labs highlighting critical security flaws in ChatGPT plugins that present a new risk for enterprises.
Plugins give AI chatbots like ChatGPT access and permissions to perform tasks on behalf of users within third-party websites.
These security flaws introduce a new attack vector: they could enable bad actors to gain control of accounts on third-party websites and access Personally Identifiable Information (PII) and other sensitive user data stored within third-party applications.
ChatGPT plugins extend the model's abilities, allowing the chatbot to interact with external services.
The integration of these third-party plugins significantly enhances ChatGPT's applicability across various domains, from software development and data management, to educational and business environments.
When organisations use such plugins, they give ChatGPT permission to send sensitive organisational data to a third-party website and to access private external accounts.
Notably, in November 2023, ChatGPT introduced a new feature, GPTs, a concept similar to plugins.
The Salt Labs team uncovered three different types of vulnerabilities within ChatGPT plugins.
The first of which was noted within ChatGPT itself when users install new plugins.
During this process, ChatGPT redirects a user to the plugin website to receive a code to be approved by that individual.
When ChatGPT receives the approved code from a user, it automatically installs the plugin and can interact with that plugin on behalf of the user.
Salt Labs researchers discovered that an attacker could exploit this flow by delivering a victim an approval code for a malicious plugin of the attacker's own, causing ChatGPT to install that plugin, with the attacker's credentials, on the victim's account.
Any message that the user writes in ChatGPT may be forwarded to a plugin, meaning an attacker would have access to a host of proprietary information.
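The flaw in the installation flow described above is essentially a classic OAuth code-injection problem: the approval code is trusted on its own, rather than being bound to the session that initiated the install. The sketch below is purely illustrative (the class, method, and parameter names are assumptions, not OpenAI's actual API) and shows why binding the redirect to a per-session `state` value blocks an attacker-supplied code:

```python
# Hypothetical sketch of the plugin-approval flow described by Salt Labs.
# All names here are illustrative assumptions, not OpenAI's real API.
import secrets

class ChatGPTSession:
    """Simulates the client side of the plugin install flow."""
    def __init__(self):
        self.installed_plugins = []
        self.pending_state = None

    def start_install(self, plugin_name):
        # A safe flow mints a per-request state token and requires it
        # back on the redirect; the reported flaw is that the code
        # alone was trusted.
        self.pending_state = secrets.token_urlsafe(16)
        return {"plugin": plugin_name, "state": self.pending_state}

    def finish_install_vulnerable(self, plugin_name, code):
        # Vulnerable: any approval code is accepted, so an attacker can
        # send the victim a code for a *malicious* plugin out of band.
        self.installed_plugins.append((plugin_name, code))
        return True

    def finish_install_fixed(self, plugin_name, code, state):
        # Fixed: the redirect must echo the state minted for this session.
        if state != self.pending_state:
            return False
        self.installed_plugins.append((plugin_name, code))
        return True

session = ChatGPTSession()
session.start_install("trusted-plugin")

# Attacker delivers an approval code for their own plugin:
assert session.finish_install_vulnerable("evil-plugin", "attacker-code")

# With state binding, the forged redirect is rejected:
assert not session.finish_install_fixed("evil-plugin", "attacker-code",
                                        state="forged")
```

Once the malicious plugin is installed this way, everything the victim types can be routed through infrastructure the attacker controls.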
The second vulnerability was discovered within PluginLab, a framework developers and companies use to build plugins for ChatGPT. During installation, Salt Labs researchers found that PluginLab did not properly authenticate user accounts. A prospective attacker could therefore insert another user's ID and obtain a code representing that victim, leading to account takeover on the plugin.
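The PluginLab issue boils down to trusting a caller-supplied identifier instead of the authenticated session's identity when issuing a code. The endpoint and field names below are hypothetical, but the sketch captures the pattern and the fix:

```python
# Illustrative sketch of the PluginLab-style flaw. The endpoint shape
# and the "memberId" field are assumptions for demonstration only.
import secrets

CODES = {}  # code -> identity the code represents

def issue_code_vulnerable(request):
    # Flaw: the member ID comes from the request body, so an attacker
    # can request a code "representing" any victim.
    code = secrets.token_hex(8)
    CODES[code] = request["memberId"]
    return code

def issue_code_fixed(request, session):
    # Fix: derive the identity from the authenticated session only;
    # ignore whatever ID the caller claims.
    code = secrets.token_hex(8)
    CODES[code] = session["member_id"]
    return code

# Attacker, logged in as themselves, asks for a code for the victim:
stolen = issue_code_vulnerable({"memberId": "victim-id-123"})
assert CODES[stolen] == "victim-id-123"   # code now represents the victim

safe = issue_code_fixed({"memberId": "victim-id-123"},
                        session={"member_id": "attacker-id"})
assert CODES[safe] == "attacker-id"       # forged memberId is ignored
```

The design lesson is that any value used to mint an authorization code must come from server-side session state, never from attacker-controllable request parameters.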
The third and final vulnerability uncovered within several plugins was OAuth redirection manipulation.
This flaw, too, enables account takeover on the ChatGPT plugin itself.
To exploit it, an attacker sends a crafted link to the victim.
Several plugins do not validate the OAuth redirect URL, which means an attacker can insert a malicious URL and steal user credentials.
With the stolen credentials, the attacker can then take over the victim's account in the same way.
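Unvalidated redirects of this kind are why the OAuth specification requires `redirect_uri` to be checked against a pre-registered allow-list. The sketch below (hosts and parameter names are illustrative) shows how a missing check leaks the authorization code to an attacker-controlled host, and how an allow-list blocks it:

```python
# Minimal sketch of OAuth redirect manipulation; the plugin host and
# allow-list entries are hypothetical examples.
from urllib.parse import urlparse

ALLOWED_REDIRECTS = {"plugin.example.com"}  # pre-registered hosts

def authorize_vulnerable(redirect_uri, code):
    # Flaw: the code is appended to whatever URI the crafted link
    # contained, so it is delivered straight to the attacker.
    return f"{redirect_uri}?code={code}"

def authorize_fixed(redirect_uri, code):
    # Fix: only redirect to hosts on the registered allow-list.
    if urlparse(redirect_uri).netloc not in ALLOWED_REDIRECTS:
        raise ValueError("redirect_uri not on allow-list")
    return f"{redirect_uri}?code={code}"

# Victim clicks an attacker-crafted link:
leak = authorize_vulnerable("https://attacker.example/steal", "s3cr3t")
assert leak.startswith("https://attacker.example/steal")

try:
    authorize_fixed("https://attacker.example/steal", "s3cr3t")
    blocked = False
except ValueError:
    blocked = True
assert blocked
```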
Upon discovering the vulnerabilities, Salt Labs' researchers followed coordinated disclosure practices with OpenAI and third-party vendors, and all issues were remediated quickly, with no evidence that these flaws had been exploited in the wild.
Published on www.itsecurityguru.org, Wed, 13 Mar 2024 16:43:06 +0000.