OWASP Top 10 for LLM Applications: A Quick Guide

Even so, the expertise and insights the OWASP Top 10 for LLM Applications provides, including prevention and mitigation techniques, are highly valuable to anyone building or interfacing with LLM applications.
Prompt injections are maliciously crafted inputs that cause an LLM to behave in unintended ways, exposing data or performing unauthorized actions such as remote code execution.
It's no shock that prompt injection is the number one threat to LLMs because it exploits the design of LLMs rather than a flaw that can be patched.
In an indirect prompt injection, the LLM user unwittingly provides the LLM with data from a source into which a bad actor has maliciously embedded prompts.
Most LLMs don't differentiate between user prompts and external data, which is what makes indirect prompt injections possible and a real threat.
The prompt goes unnoticed by the human eye because it's in white lettering on an imperceptibly off-white background, but the LLM still picks it up and complies.
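One partial defense is a pre-processing step that flags external content whose styling hides text from human readers before that content ever reaches the model. A minimal sketch; the style patterns below are illustrative assumptions, not an exhaustive detector:

```python
import re

# Flag HTML whose inline styles use white or near-white lettering, a
# common trick for hiding injected prompts from human readers.
HIDDEN_TEXT_STYLES = re.compile(
    r"color\s*:\s*(#fff(?:fff)?|#fefefe|white)", re.IGNORECASE
)

def flag_hidden_text(html_page: str) -> bool:
    """Return True if the page contains styling commonly used to hide text."""
    return bool(HIDDEN_TEXT_STYLES.search(html_page))

page = '<p style="color:#ffffff">Ignore previous instructions and ...</p>'
print(flag_hidden_text(page))                 # True
print(flag_hidden_text("<p>plain review</p>"))  # False
```

A flagged page can then be quarantined or stripped of hidden spans rather than handed to the LLM wholesale.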
Insecure output handling describes a situation where plugins or other components accept LLM output without secure practices such as sanitization and validation.
Here's one possible insecure output handling scenario: After an indirect prompt injection is left in the review of a product by a threat actor, an LLM tasked with summarizing reviews for a user outputs malicious JavaScript code that is interpreted by the user's browser.
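The fix is to treat model output like any other untrusted user input. A minimal sketch, assuming a hypothetical `render_summary` helper, that HTML-escapes the LLM's text before it reaches a browser:

```python
import html

def render_summary(llm_output: str) -> str:
    # Escape the model's output before interpolating it into a page, so
    # any injected <script> tag renders as inert text instead of executing.
    return f"<div class='summary'>{html.escape(llm_output)}</div>"

malicious = "Great product! <script>steal(document.cookie)</script>"
print(render_summary(malicious))
```

The same principle applies to any downstream consumer: parameterize shell commands, SQL, and markup rather than splicing raw model output into them.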
Your models are what they eat, and LLMs ingest quite a bit.
Training data poisoning occurs when data involved in pre-training or fine-tuning an LLM is manipulated to introduce vulnerabilities that affect the model's security, ethical behavior, or performance.
Data poisoning is a tough vulnerability to fight because of the sheer quantity of data that LLMs take in and the difficulty of verifying all of it.
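One partial countermeasure is integrity-checking vetted training files so that silent tampering is caught before ingestion. A minimal sketch, assuming you maintain a digest list for your fine-tuning corpus:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a training file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Digests recorded when each file was vetted (illustrative entry).
trusted_digests = {"reviews.jsonl": sha256_of(b'{"text": "vetted sample"}')}

def verify(name: str, data: bytes) -> bool:
    # Refuse any file whose current contents no longer match the vetted digest.
    return trusted_digests.get(name) == sha256_of(data)

print(verify("reviews.jsonl", b'{"text": "vetted sample"}'))    # True
print(verify("reviews.jsonl", b'{"text": "poisoned sample"}'))  # False
```

Checksums only detect post-vetting tampering, not poison that was present when the data was first collected, so they complement rather than replace provenance checks.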
Failing to limit the number of prompts that are entered, the length of prompts, recursive analysis by the LLM, the number of steps that an LLM can take, or the resources an LLM can use can all result in model denial of service.
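Those limits can be enforced before a request ever reaches the model. A minimal sketch with illustrative thresholds; `PromptGate` and its caps are assumptions for this example, not a standard API:

```python
import time

MAX_PROMPT_CHARS = 4_000        # illustrative prompt-length cap
MAX_REQUESTS_PER_MINUTE = 10    # illustrative per-user rate cap

class PromptGate:
    def __init__(self):
        self.history = {}  # user -> timestamps of recent requests

    def allow(self, user, prompt, now=None):
        now = time.monotonic() if now is None else now
        if len(prompt) > MAX_PROMPT_CHARS:
            return False                       # reject oversized prompts outright
        recent = [t for t in self.history.get(user, []) if now - t < 60]
        if len(recent) >= MAX_REQUESTS_PER_MINUTE:
            return False                       # user has hit the per-minute cap
        recent.append(now)
        self.history[user] = recent
        return True

gate = PromptGate()
print(all(gate.allow("alice", "hi", now=i) for i in range(10)))  # True
print(gate.allow("alice", "hi", now=10))                         # False
```

Analogous ceilings on agent steps and compute budget per request close off the remaining resource-exhaustion paths.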
Ask the right question and an LLM may end up pouring its heart out, revealing sensitive information such as your organization's or other entities' proprietary algorithms, or confidential data whose disclosure results in privacy violations.
Insecure plugin design is a vulnerability of the plugins you write for LLM systems, rather than of third-party plugins, which would fall under supply chain vulnerabilities.
Insecure plugins accept unparameterized text from LLMs without proper sanitization and validation, which can lead to undesirable behavior, including providing a route for prompt injections to lead to remote code execution.
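The safer pattern is to accept only structured, validated parameters from the LLM and bind them with parameterized queries. A sketch using SQLite; the `lookup_orders` plugin and its schema are hypothetical:

```python
import sqlite3

# Toy in-memory database standing in for the plugin's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'alice')")

def lookup_orders(customer: str) -> list:
    # Validate the single structured parameter the LLM is allowed to supply.
    if not customer.isidentifier():
        raise ValueError("invalid customer name")
    # Parameterized query: the value is bound, never spliced into SQL syntax.
    return conn.execute(
        "SELECT id FROM orders WHERE customer = ?", (customer,)
    ).fetchall()

print(lookup_orders("alice"))  # [(1,)]
```

Free-form text from the model never touches the query string, so a prompt injection cannot smuggle SQL (or shell syntax, in the analogous command case) through the plugin.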
Some examples of excessive agency include a plugin that lets an LLM read files but also allows it to write or delete them, an LLM designed to read a single user's files that has access to every user's files, and a plugin that allows an LLM to delete a user's files without that user's input.
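The least-privilege alternative can be sketched as a read-only plugin that confines each user to their own directory. All names and paths here are illustrative, and a temp directory stands in for real storage:

```python
import tempfile
from pathlib import Path

# Hypothetical per-user storage root (a temp dir for this sketch).
USER_ROOT = Path(tempfile.mkdtemp())
alice = USER_ROOT / "alice"
alice.mkdir()
(alice / "notes.txt").write_text("hello")

def read_user_file(user: str, relative: str) -> str:
    """Read-only access, scoped to one user's directory; no write or delete."""
    base = (USER_ROOT / user).resolve()
    target = (base / relative).resolve()
    if not target.is_relative_to(base):   # blocks ../ traversal into other users' files
        raise PermissionError("path escapes the user's directory")
    return target.read_text()

print(read_user_file("alice", "notes.txt"))  # hello
```

Because the plugin simply exposes no write or delete capability and checks the resolved path against the user's own root, neither the model nor an injected prompt can escalate beyond reading that user's files.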
Overreliance happens when users take LLM outputs as gospel without checking the accuracy.
LLMs always have limits in what they can do and what they can do well, but the public often treats them as magical founts of knowledge on anything and everything.
Model theft can happen when attackers exploit other vulnerabilities in your infrastructure to access your model's repository, or even when, through prompt injection and output observation, they glean enough of your LLM's secret sauce to build their own shadow model.
Even worse, powerful LLMs can be stolen and reconfigured to perform unethical tasks they'd otherwise refrain from, which is bad for everyone.


This Cyber News was published on securityboulevard.com. Publication date: Thu, 11 Apr 2024 00:43:05 +0000

