Multi-turn attacks on LLM models raise security concerns

Large Language Models (LLMs) are increasingly targeted by sophisticated multi-turn attacks that exploit their conversational nature to bypass security measures. These attacks involve a series of interactions where attackers manipulate the model's responses over multiple turns to extract sensitive information or induce harmful outputs. The complexity of these attacks poses significant challenges for cybersecurity professionals aiming to protect AI-driven systems. This article explores the mechanisms behind multi-turn attacks on LLMs, the potential risks they introduce, and the strategies being developed to mitigate these threats. Understanding these attack vectors is crucial for organizations deploying LLMs to ensure robust defenses against evolving AI threats. The discussion also highlights the importance of continuous monitoring and adaptive security frameworks to safeguard AI applications in various sectors.

This Cyber News was published on www.infosecurity-magazine.com. Publication date: Thu, 06 Nov 2025 15:00:03 +0000


Cyber News related to Multi-turn attacks on LLM models raise security concerns

OWASP Top 10 for LLM Applications: A Quick Guide - Even still, the expertise and insights provided, including prevention and mitigation techniques, are highly valuable to anyone building or interfacing with LLM applications. Prompt injections are maliciously crafted inputs that lead to an LLM ...
1 year ago Securityboulevard.com
25 Best Managed Security Service Providers (MSSP) - 2025 - Pros & Cons: ProsConsStrong threat intelligence & expert SOCs.High pricing for SMBs.24/7 monitoring & rapid incident response.Complex UI and steep learning curve.Flexible, scalable, hybrid deployments.Limited visibility into endpoint ...
4 months ago Cybersecuritynews.com
Researchers Show How to Use One LLM to Jailbreak Another - The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased content when prompted in specific ways. The latest ...
1 year ago Darkreading.com
Securing AI: Navigating the Complex Landscape of Models, Fine-Tuning, and RAG - It underscores the urgent need for robust security measures and proper monitoring in developing, fine-tuning, and deploying AI models. The emergence of advanced models, like Generative Pre-trained Transformer 4, marks a new era in the AI landscape. ...
1 year ago Feedpress.me
AI models can be weaponized to hack websites on their own The Register - AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission. When wedded with tools that enable automated interaction with other systems, they can act on their own as malicious agents. ...
1 year ago Go.theregister.com
9 Best DDoS Protection Service Providers for 2024 - eSecurity Planet content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More. One of the most powerful defenses an organization can employ against distributed ...
1 year ago Esecurityplanet.com
Exploring the Security Risks of LLM - According to a recent survey, 74% of IT decision-makers have expressed concerns about the cybersecurity risks associated with LLMs, such as the potential for spreading misinformation. Security Concerns of LLMs While the potential applications of ...
1 year ago Feeds.dzone.com
Akto Launches Proactive GenAI Security Testing Solution - With the increasing reliance on GenAI models and Language Learning Models like ChatGPT, the need for robust security measures have become paramount. Akto, a leading API Security company, is proud to announce the launch of its revolutionary GenAI ...
1 year ago Darkreading.com
Hugging Face dodged a cyber-bullet with Lasso Security's help - Further validating how brittle the security of generative AI models and their platforms are, Lasso Security helped Hugging Face dodge a potentially devastating attack by discovering that 1,681 API tokens were at risk of being compromised. The tokens ...
1 year ago Venturebeat.com
The impact of prompt injection in LLM agents - This risk is particularly alarming when LLMs are turned into agents that interact directly with the external world, utilizing tools to fetch data or execute actions. Malicious actors can leverage prompt injection techniques to generate unintended and ...
1 year ago Helpnetsecurity.com
Forget Deepfakes or Phishing: Prompt Injection is GenAI's Biggest Problem - Cybersecurity professionals and technology innovators need to be thinking less about the threats from GenAI and more about the threats to GenAI from attackers who know how to pick apart the design weaknesses and flaws in these systems. Chief among ...
1 year ago Darkreading.com
In the rush to build AI apps, don't leave security behind The Register - There are countless models, libraries, algorithms, pre-built tools, and packages to play with, and progress is relentless. You'll typically glue together libraries, packages, training data, models, and custom source code to perform inference tasks. ...
1 year ago Go.theregister.com Hunters
Meta AI Models Cracked Open With Exposed API Tokens - Researchers recently were able to get full read and write access to Meta's Bloom, Meta-Llama, and Pythia large language model repositories in a troubling demonstration of the supply chain risks to organizations using these repositories to integrate ...
1 year ago Darkreading.com
Flawed AI Tools Create Worries for Private LLMs, Chatbots - Companies that use private instances of large language models to make their business data searchable through a conversational interface face risks of data poisoning and potential data leakage if they do not properly implement security controls to ...
1 year ago Darkreading.com
Multi-turn attacks on LLM models raise security concerns - Large Language Models (LLMs) are increasingly targeted by sophisticated multi-turn attacks that exploit their conversational nature to bypass security measures. These attacks involve a series of interactions where attackers manipulate the model's ...
6 days ago Infosecurity-magazine.com
ChatGPT and Beyond: Generative AI in Security - The impact of generative AI, particularly models like ChatGPT, has captured the imagination of many in the security industry. Generative AIs encompass a variety of techniques such as large language models, generative adversarial networks, diffusion ...
1 year ago Securityboulevard.com
Researchers automated jailbreaking of LLMs with other LLMs - AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models in an automated fashion. Their findings suggest that this vulnerability is universal ...
1 year ago Helpnetsecurity.com Hunters
Three Tips To Use AI Securely at Work - Simon makes a very good point that AI is becoming similar to open source software in a way. To remain nimble and leverage the work of great minds from around the world, companies will need to adopt it or spend a lot of time and money trying to ...
1 year ago Securityboulevard.com
Addressing Deceptive AI: OpenAI Rival Anthropic Uncovers Difficulties in Correction - There is a possibility that artificial intelligence models can be trained to deceive. According to a new research led by Google-backed AI startup Anthropic, if a model exhibits deceptive behaviour, standard techniques cannot remove the deception and ...
1 year ago Cysecurity.news
5 Unique Challenges for AI in Cybersecurity - Applied AI in cybersecurity has many unique challenges, and we will take a look into a few of them that we are considering the most important. On the other hand, supervised learning systems can remediate this issue and filter out anomalous by design ...
1 year ago Paloaltonetworks.com
Startups Scramble to Build Immediate AI Security - It also elevated startups working on machine learning security operations, AppSec remediation, and adding privacy to AI with fully homomorphic encryption. AI's largest attack surface involves its foundational models, such as Meta's Llama, or those ...
1 year ago Darkreading.com
Top LLM vulnerabilities and how to mitigate the associated risk - As large language models become more prevalent, a comprehensive understanding of the LLM threat landscape remains elusive. While the AI threat landscape changes every day, there are a handful of LLM vulnerabilities that we know pose significant risk ...
1 year ago Helpnetsecurity.com
How machine learning helps us hunt threats | Securelist - In this post, we will share our experience hunting for new threats by processing Kaspersky Security Network (KSN) global threat data with ML tools to identify subtle new Indicators of Compromise (IoCs). The model can process and learn from millions ...
1 year ago Securelist.com
New 'LLMjacking' Attack Exploits Stolen Cloud Credentials - The attackers gained access to these credentials from a vulnerable version of Laravel, according to a blog post published on May 6. Unlike previous discussions surrounding LLM-based Artificial Intelligence systems, which focused on prompt abuse and ...
1 year ago Infosecurity-magazine.com
Surge in Cloud Threats Spikes Rapid Adoption of CNAPPs for Cloud-Native Security - CNAPPs integrate multiple previously separate technologies—including Cloud Security Posture Management (CSPM), Cloud Workload Protection Platforms (CWPP), Cloud Infrastructure Entitlement Management (CIEM), Kubernetes Security Posture Management ...
6 months ago Cybersecuritynews.com

Cyber Trends (last 7 days)