In a recent study by Northwestern University, researchers uncovered a startling vulnerability in customized Generative Pre-trained Transformers.
While these GPTs can be tailored for a wide range of applications, they are also vulnerable to rapid injection attacks, which can divulge confidential data.
GPTs are advanced AI chatbots that can be customized by OpenAI's ChatGPT users.
They utilize the Large Language Model at the heart of ChatGPT, GPT-4 Turbo, but are augmented with more, special components that impact their user interface, such as customized datasets, prompts, and processing instructions, enabling them to perform a variety of specialized tasks.
The parameters and sensitive data that a user might use to customize the GPT could be left vulnerable to a third party.
In their study, the researchers tested over 200 custom GPTs wherein the high risk of such attacks was revealed.
These jailbreaks might also result in the extraction of initial prompts and unauthorized access to uploaded files.
The researchers further highlighted the risks of these assaults since they jeopardize both user privacy and the integrity of intellectual property.
The researchers further note that the existing defences, like defensive prompts, prove insufficient in front of the sophisticated adversarial prompts.
The team said that this will require a more 'robust and comprehensive approach' to protect the new AI models.
Although there is much potential for customization of GPTs, this study is an important reminder of the security risks involved.
AI developments must not jeopardize user privacy and security.
For now, it is advisable for users to keep the most important or sensitive GPTs to themselves, or at least not train them with their sensitive data.
This Cyber News was published on www.cysecurity.news. Publication date: Thu, 14 Dec 2023 15:43:04 +0000