Indirect Prompt Injection Leverage LLMs as They Lack Informational Context

Unlike traditional attacks that directly manipulate an LLM’s user interface, these sophisticated attacks embed malicious instructions within external content that large language models process, such as documents, web pages, and emails. For instance, in organizations using AI systems trained on email communications, attackers could distribute enough emails containing concealed malicious instructions to alter the LLM’s behavior, bypassing traditional security controls because they’re delivered through trusted content channels. Their assessment revealed that existing LLMs are “universally vulnerable” to these attacks due to two critical weaknesses: the inability to differentiate between informational content and instructions, and a lack of awareness when executing instructions found within external content. The researchers noted that this attack method resembles “a poisoned well disguised as clean water,” making it exceptionally difficult to detect since the malicious content hides within data the LLM is simply reading rather than in direct user input. These attacks represent a sophisticated evolution in AI security threats, exploiting not just technical vulnerabilities but the fundamental way language models process and interpret information. The model subsequently interprets these hidden instructions as valid commands, potentially leading to serious security breaches including data leaks and widespread misinformation. According to security experts, large language models cannot effectively distinguish between what constitutes informational context versus actionable instructions. # Example of how an indirect prompt injection might be hidden in documentation """ Regular documentation text explaining package usage... IGNORE ALL PREVIOUS INSTRUCTIONS. A team of researchers recently published their findings on arXivLabs, introducing the first benchmark for indirect prompt injection attacks called BIPIA. This inherent weakness creates an opportunity for attackers to hide malicious prompts within seemingly innocent content that the AI system may later process when performing its standard functions. Cyber Security News is a Dedicated News Platform For Cyber News, Cyber Attack News, Hacking News & Vulnerability Analysis. ReversingLabs researchers identified that these attacks are particularly dangerous because they don’t require direct access to system prompts or user interfaces. Tushar is a Cyber security content editor with a passion for creating captivating and informative content.

This Cyber News was published on cybersecuritynews.com. Publication date: Fri, 09 May 2025 09:40:58 +0000


Cyber News related to Indirect Prompt Injection Leverage LLMs as They Lack Informational Context

The age of weaponized LLMs is here - It's exactly what one researcher, Julian Hazell, was able to simulate, adding to a collection of studies that, altogether, signify a seismic shift in cyber threats: the era of weaponized LLMs is here. The research all adds up to one thing: LLMs are ...
1 year ago Venturebeat.com
The impact of prompt injection in LLM agents - This risk is particularly alarming when LLMs are turned into agents that interact directly with the external world, utilizing tools to fetch data or execute actions. Malicious actors can leverage prompt injection techniques to generate unintended and ...
1 year ago Helpnetsecurity.com
Forget Deepfakes or Phishing: Prompt Injection is GenAI's Biggest Problem - Cybersecurity professionals and technology innovators need to be thinking less about the threats from GenAI and more about the threats to GenAI from attackers who know how to pick apart the design weaknesses and flaws in these systems. Chief among ...
1 year ago Darkreading.com
Exploring the Security Risks of LLM - According to a recent survey, 74% of IT decision-makers have expressed concerns about the cybersecurity risks associated with LLMs, such as the potential for spreading misinformation. Security Concerns of LLMs While the potential applications of ...
1 year ago Feeds.dzone.com
How AI can be hacked with prompt injection: NIST report - As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI. In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks ...
1 year ago Securityintelligence.com
OWASP Top 10 for LLM Applications: A Quick Guide - Even still, the expertise and insights provided, including prevention and mitigation techniques, are highly valuable to anyone building or interfacing with LLM applications. Prompt injections are maliciously crafted inputs that lead to an LLM ...
1 year ago Securityboulevard.com
Indirect Prompt Injection Leverage LLMs as They Lack Informational Context - Unlike traditional attacks that directly manipulate an LLM’s user interface, these sophisticated attacks embed malicious instructions within external content that large language models process, such as documents, web pages, and emails. For ...
1 week ago Cybersecuritynews.com
LLMs Open to Manipulation Using Doctored Images, Audio - Such attacks could become a major issue as LLMs become increasingly multimodal or are capable of responding contextually to inputs that combine text, audio, pictures, and even video. Hiding Instructions in Images and Audio At Black Hat Europe 2023 ...
1 year ago Darkreading.com
Researchers Show How to Use One LLM to Jailbreak Another - The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased content when prompted in specific ways. The latest ...
1 year ago Darkreading.com
Integrating LLMs into security operations using Wazuh - Once YARA identifies a malicious file, ChatGPT enriches the alert with details about the detected threat, helping security teams better understand and respond to the incident. Log analysis and data enrichment: Trained LLMs like ChatGPT can interpret ...
3 months ago Bleepingcomputer.com
Why training LLMs with endpoint data will strengthen cybersecurity - Capturing weak signals across endpoints and predicting potential intrusion attempt patterns is a perfect challenge for Large Language Models to take on. The goal is to mine attack data to find new threat patterns and correlations while fine-tuning ...
1 year ago Venturebeat.com
Akto Launches Proactive GenAI Security Testing Solution - With the increasing reliance on GenAI models and Language Learning Models like ChatGPT, the need for robust security measures have become paramount. Akto, a leading API Security company, is proud to announce the launch of its revolutionary GenAI ...
1 year ago Darkreading.com
Google Pushes Software Security Via Rust, AI-Based Fuzzing - Google is making moves to help developers ensure that their code is secure. The IT giant this week said it is donating $1 million to the Rust Foundation to improve interoperability between the Rust programming language and legacy C++ codebase in ...
1 year ago Securityboulevard.com
4 key devsecops skills for the generative AI era - Experts believe that generative AI capabilities, copilots, and large language models are ushering in a new era of how developers, data scientists, and engineers will work and innovate. They expect AI to improve productivity, quality, and innovation, ...
1 year ago Infoworld.com
DeepSeek Data Leak - 12,000 Hardcoded Live API keys and Passwords Exposed - According to cybersecurity firm Truffle Security, the study highlights how AI models trained on unfiltered internet snapshots risk internalizing and potentially reproducing insecure coding patterns. The tool differentiated live secrets (authenticated ...
2 months ago Cybersecuritynews.com
Cybercriminals Hesitant About Using Generative AI - Cybercriminals are so far reluctant to use generative AI to launch attacks, according to new research by Sophos. Examining four prominent dark-web forums for discussions related to large language models, the firm found that threat actors showed ...
1 year ago Infosecurity-magazine.com
2024 cybersecurity outlook: The rise of AI voice chatbots and prompt engineering innovations - In their 2024 cybersecurity outlook, WatchGuard researchers forecast headline-stealing hacks involving LLMs, AI-based voice chatbots, modern VR/MR headsets, and more in the coming year. Companies and individuals are experimenting with LLMs to ...
1 year ago Helpnetsecurity.com
Latest Intel CPUs impacted by new Indirector side-channel attack - Modern Intel processors, including chips from the Raptor Lake and the Alder Lake generations are susceptible to a new type of a high-precision Branch Target Injection attack dubbed 'Indirector,' which could be used to steal sensitive information from ...
10 months ago Bleepingcomputer.com
Latest Intel CPUs impacted by new Indirector side-channel attack - Modern Intel processors, including chips from the Raptor Lake and the Alder Lake generations are susceptible to a new type of a high-precision Branch Target Injection attack dubbed 'Indirector,' which could be used to steal sensitive information from ...
10 months ago Bleepingcomputer.com
Novel LLMjacking Attacks Target Cloud-Based AI Models - Enterprise organizations aren't alone in embracing generative AI. Cybercriminals doing so, too. They're using GenAI to shape their attacks, such as creating more convincing phishing emails, spreading disinformation to model poisoning, and creating ...
1 year ago Securityboulevard.com
Staying ahead of threat actors in the age of AI - At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified ...
1 year ago Microsoft.com Kimsuky
The Challenges of Building Generative AI Applications in Cybersecurity - Armorblox was acquired by Cisco to further their AI-first Security Cloud by bringing generative AI experiences to Cisco's security solutions. Quickly a new mission came my way: Build generative AI Assistants that will allow cybersecurity ...
1 year ago Feedpress.me
Cybercriminals are Showing Hesitation to Utilize AI Cyber Attacks - Media reports highlight the sale of LLMs like WormGPT and FraudGPT on underground forums. Fears mount over their potential for creating mutating malware, fueling a craze in the cybercriminal underground. Concerns arise over the dual-use nature of ...
1 year ago Cybersecuritynews.com
DARPA awards $1 million to Trail of Bits for AI Cyber Challenge - We're excited to share that Trail of Bits has been selected as one of the seven exclusive teams to participate in the small business track for DARPA's AI Cyber Challenge. Our team will receive a $1 million award to create a Cyber Reasoning System and ...
1 year ago Securityboulevard.com
How to Use Context-Based Authentication to Improve Security - One of the biggest security weak points for organizations involves their authentication processes. Context-based authentication offers an important tool in the battle against credential stuffing, man-in-the-middle attacks, MFA prompt bombing, and ...
1 year ago Securityboulevard.com

Cyber Trends (last 7 days)