LLMs Open to Manipulation Using Doctored Images, Audio

Such attacks could become a major issue as LLMs become increasingly multimodal or are capable of responding contextually to inputs that combine text, audio, pictures, and even video.
Hiding Instructions in Images and Audio At Black Hat Europe 2023 this week, researchers from Cornell University will demonstrate an attack they developed that uses images and sounds to inject instructions into multimodal LLMs that causes the model to output attacker-specified text and instructions.
The researchers blended an instruction into an audio clip available online that caused PandaGPT to respond with an attacker-specific string.
The researchers blended an instruction in an image of a building, that would have caused LLaVA to chat like Harry Potter if a user had input the image into the chatbot and asked a question about it.
Ben Nassi, a researcher at Cornell University and one of the authors of the report, says one of the goals of their research was to find ways to inject prompts indirectly into a multimodal chatbot in a manner undetectable to the user.
Nassi describes the research as building on studies by others showing how LLMs are vulnerable to prompt injection attacks where an adversary might engineer inputs or prompts in such a manner as to intentionally influence the model's output.
The attack that Nassi and his team will demonstrate at Black Hat is different in that in involves an indirect prompt.
In other words, the user is not so much the attacker - as is the case with regular prompt injection - but rather the victim.
The other two authors are Cornell researchers Tsung-Yin Hsieh and Vitaly Shmatikov.
Indirect Prompt Injection Attacks The new paper is not the first to explore the idea of indirect prompt injection as a way to attack LLMs. In May, researchers at Germany's CISPA Helmholtz Center for Information Security at Saarland University and Sequire Technology published a report that described how an attacker could exploit LLM models by injecting hidden prompts into data that the model would likely retrieve when responding to a user input.
In that case the attack involved strategically placed text prompts.
Bagdasaryan says their attack is different because it shows how an attacker could inject malicious instructions into audio and image inputs as well, making them potentially harder to detect.
One other distinction with the attacks involving manipulated audio and image inputs is that the chatbot will continue to respond in its instructed manner during the entirely of a conversation.
Prompting the chatbot to respond in Harry Potter-like fashion causes it to continue to do so even when the user may have stopped asking about the specific image or audio sample.
Potential ways to direct a user to a weaponized image or audio clip could include a phishing or social engineering lure to a webpage with an interesting image or an email with an audio clip.
The research is significant because many organizations are rushing to integrate LLM capabilities into their applications and operations.
Attackers that devise ways to sneak poisoned text, image, and audio prompts into these environments could cause significant damage.


This Cyber News was published on www.darkreading.com. Publication date: Tue, 05 Dec 2023 22:50:32 +0000


Cyber News related to LLMs Open to Manipulation Using Doctored Images, Audio

The age of weaponized LLMs is here - It's exactly what one researcher, Julian Hazell, was able to simulate, adding to a collection of studies that, altogether, signify a seismic shift in cyber threats: the era of weaponized LLMs is here. The research all adds up to one thing: LLMs are ...
10 months ago Venturebeat.com
What Is Patch Management? - Containers are created using a container image, and a container image is created using a Dockerfile/Containerfile that includes instructions for building an image. Considering the patch management and vulnerability management for containers, let's ...
9 months ago Feeds.dzone.com
Exploring the Security Risks of LLM - According to a recent survey, 74% of IT decision-makers have expressed concerns about the cybersecurity risks associated with LLMs, such as the potential for spreading misinformation. Security Concerns of LLMs While the potential applications of ...
10 months ago Feeds.dzone.com
LLMs Open to Manipulation Using Doctored Images, Audio - Such attacks could become a major issue as LLMs become increasingly multimodal or are capable of responding contextually to inputs that combine text, audio, pictures, and even video. Hiding Instructions in Images and Audio At Black Hat Europe 2023 ...
11 months ago Darkreading.com
The impact of prompt injection in LLM agents - This risk is particularly alarming when LLMs are turned into agents that interact directly with the external world, utilizing tools to fetch data or execute actions. Malicious actors can leverage prompt injection techniques to generate unintended and ...
10 months ago Helpnetsecurity.com
Why training LLMs with endpoint data will strengthen cybersecurity - Capturing weak signals across endpoints and predicting potential intrusion attempt patterns is a perfect challenge for Large Language Models to take on. The goal is to mine attack data to find new threat patterns and correlations while fine-tuning ...
10 months ago Venturebeat.com
OWASP Top 10 for LLM Applications: A Quick Guide - Even still, the expertise and insights provided, including prevention and mitigation techniques, are highly valuable to anyone building or interfacing with LLM applications. Prompt injections are maliciously crafted inputs that lead to an LLM ...
6 months ago Securityboulevard.com
CVE-2007-0018 - Stack-based buffer overflow in the NCTAudioFile2.AudioFile ActiveX control (NCTAudioFile2.dll), as used by multiple products, allows remote attackers to execute arbitrary code via a long argument to the SetFormatLikeSample function. NOTE: the ...
6 years ago
Researchers Show How to Use One LLM to Jailbreak Another - The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased content when prompted in specific ways. The latest ...
11 months ago Darkreading.com
Cybercriminals Hesitant About Using Generative AI - Cybercriminals are so far reluctant to use generative AI to launch attacks, according to new research by Sophos. Examining four prominent dark-web forums for discussions related to large language models, the firm found that threat actors showed ...
11 months ago Infosecurity-magazine.com
Akto Launches Proactive GenAI Security Testing Solution - With the increasing reliance on GenAI models and Language Learning Models like ChatGPT, the need for robust security measures have become paramount. Akto, a leading API Security company, is proud to announce the launch of its revolutionary GenAI ...
8 months ago Darkreading.com
but that doesn't mean we shouldn't be concerned - These images, believed to be created using Microsoft Designer, garnered widespread attention and highlighted the ever-growing challenge of AI-generated fake pornography. As these images rapidly spread across the platform, the incident not only ...
9 months ago Blog.avast.com
Docker Image Building Best Practices - Starting with a basic, minimum image is essential when creating Docker images. They let you utilize numerous Docker images throughout the build process, which helps to reduce the size of the final image by removing unneeded build artifacts. Docker ...
10 months ago Feeds.dzone.com
Google Pushes Software Security Via Rust, AI-Based Fuzzing - Google is making moves to help developers ensure that their code is secure. The IT giant this week said it is donating $1 million to the Rust Foundation to improve interoperability between the Rust programming language and legacy C++ codebase in ...
9 months ago Securityboulevard.com
The AI-Generated Child Abuse Nightmare Is Here - Over the course of September, analysts at the IWF focused on one dark web CSAM forum, which it does not name, that generally focuses on "Softcore imagery" and imagery of girls. Within a newer AI section of the forum, a total of 20,254 AI-generated ...
11 months ago Wired.com
4 key devsecops skills for the generative AI era - Experts believe that generative AI capabilities, copilots, and large language models are ushering in a new era of how developers, data scientists, and engineers will work and innovate. They expect AI to improve productivity, quality, and innovation, ...
10 months ago Infoworld.com
Cybercriminals are Showing Hesitation to Utilize AI Cyber Attacks - Media reports highlight the sale of LLMs like WormGPT and FraudGPT on underground forums. Fears mount over their potential for creating mutating malware, fueling a craze in the cybercriminal underground. Concerns arise over the dual-use nature of ...
11 months ago Cybersecuritynews.com
Novel LLMjacking Attacks Target Cloud-Based AI Models - Enterprise organizations aren't alone in embracing generative AI. Cybercriminals doing so, too. They're using GenAI to shape their attacks, such as creating more convincing phishing emails, spreading disinformation to model poisoning, and creating ...
5 months ago Securityboulevard.com
Meta's Purple Llama wants to test safety risks in AI models - Generative Artificial Intelligence models have been around for years and their main function, compared to older AI models is that they can process more types of input. Take for example the older models that were used to determine whether a file was ...
10 months ago Malwarebytes.com
DARPA awards $1 million to Trail of Bits for AI Cyber Challenge - We're excited to share that Trail of Bits has been selected as one of the seven exclusive teams to participate in the small business track for DARPA's AI Cyber Challenge. Our team will receive a $1 million award to create a Cyber Reasoning System and ...
7 months ago Securityboulevard.com
2024 cybersecurity outlook: The rise of AI voice chatbots and prompt engineering innovations - In their 2024 cybersecurity outlook, WatchGuard researchers forecast headline-stealing hacks involving LLMs, AI-based voice chatbots, modern VR/MR headsets, and more in the coming year. Companies and individuals are experimenting with LLMs to ...
11 months ago Helpnetsecurity.com
Researchers automated jailbreaking of LLMs with other LLMs - AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models in an automated fashion. Their findings suggest that this vulnerability is universal ...
11 months ago Helpnetsecurity.com
License Plate Readers Are Creating a US-Wide Database of Political Lawn Signs and Bumper Stickers | WIRED - These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and ...
1 month ago Wired.com
Are the Fears about the EU Cyber Resilience Act Justified? - "The draft cyber resilience act approved by the Industry, Research and Energy Committee aims to ensure that products with digital features, e.g. phones or toys, are secure to use, resilient against cyber threats and provide enough information about ...
11 months ago Securityboulevard.com
Are the Fears About the EU Cyber Resilience Act Justified? - On Wednesday, July 19, the European Parliament voted in favor of a major new legal framework regarding cybersecurity: the Cyber Resilience Act. The act enters murky waters when it comes to open-source software. It typically accounts for 70% to 90% of ...
10 months ago Feeds.dzone.com

Latest Cyber News


Cyber Trends (last 7 days)


Trending Cyber News (last 7 days)