Flawed AI Tools Create Worries for Private LLMs, Chatbots

Companies that use private instances of large language models to make their business data searchable through a conversational interface face risks of data poisoning and potential data leakage if they do not properly implement security controls to harden the platforms, experts say.
Case in point: This week, Synopsys disclosed a cross-site request forgery flaw that affects applications based on the EmbedAI component created by AI provider SamurAI; it could allow attackers to fool users into uploading poisoned data into their language model, the application-security firm warned.
The attack exploits the open source component's lack of a safe cross-origin policy and failure to implement session management, and could allow an attacker to affect even a private LLM instance or chatbot, says Mohammed Alshehri, the Synopsys security researcher who found the vulnerability.
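Synopsys has not published exploit details, but the class of weakness it describes is well understood: a state-changing endpoint that neither validates the request's origin nor ties requests to an authenticated session can be driven by any page a victim happens to visit. The following is a minimal, hypothetical sketch (using Flask; the route names, origins, and cookie settings are illustrative and do not come from EmbedAI) of the kinds of checks whose absence enables this sort of cross-site request forgery against a document-upload endpoint.

```python
# Minimal sketch (not EmbedAI's actual code): the class of checks whose absence
# lets any third-party page push poisoned documents into an LLM ingestion endpoint.
import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)            # required for server-side sessions
app.config.update(
    SESSION_COOKIE_SAMESITE="Lax",                # browser won't attach cookie to cross-site POSTs
    SESSION_COOKIE_HTTPONLY=True,
)

ALLOWED_ORIGINS = {"https://llm.internal.example"}  # hypothetical internal front end

@app.before_request
def reject_cross_origin_writes():
    # Block state-changing requests whose Origin header is missing or not allowlisted.
    if request.method in ("POST", "PUT", "DELETE"):
        if request.headers.get("Origin") not in ALLOWED_ORIGINS:
            abort(403)

@app.route("/csrf-token")
def issue_token():
    # Per-session anti-CSRF token the legitimate front end must echo back on uploads.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return {"token": session["csrf_token"]}

@app.route("/upload", methods=["POST"])
def upload_training_document():
    # Without this comparison, any page the victim visits could submit
    # poisoned documents into the model's ingestion pipeline on their behalf.
    if request.headers.get("X-CSRF-Token") != session.get("csrf_token"):
        abort(403)
    document = request.files.get("document")
    if document is None:
        abort(400)
    # ... hand the vetted file to the indexing/ingestion pipeline here ...
    return {"status": "accepted"}
```

Setting the session cookie to SameSite=Lax also means browsers will not attach it to cross-site POSTs in the first place, which blunts the watering-hole scenario even if a token check is missed somewhere.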
The research underscores that the rush to integrate AI into business processes does pose risks, especially for companies that are giving LLMs and other generative-AI applications access to large repositories of data.
Overall, only 4% of US companies have adopted AI as part of their business operations, but some industries have higher adoption rates, with the information sector at 14% and the professional services sector at 9%, according to a survey by the US Census Bureau conducted in October 2023.
The risks posed by the adoption of next-gen artificial intelligence and machine learning stem not so much from the models themselves, which tend to have smaller attack surfaces, as from the software components and tools used to develop AI applications and interfaces, says Dan McInerney, lead AI threat researcher with Protect AI, an AI application security firm.
Practical Attacks Against AI Components
Such vulnerabilities are already being actively exploited.
In March, Oligo Security reported active attacks against Ray, a popular AI framework, exploiting a previously disclosed security issue, one of five vulnerabilities found by research groups at Protect AI and Bishop Fox and by independent researcher Sierra Haex.
Anyscale, the company behind Ray, fixed four vulnerabilities but considered the fifth to be a misconfiguration issue.
Attackers found hundreds of deployments that had unwisely exposed a Ray server to the Internet and compromised those systems, according to the firm's analysis.
In its own March advisory, Anyscale acknowledged the attacks and released a tool to detect insecurely configured systems.
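Anyscale's tooling aside, a team can also check its own inventory for the underlying misconfiguration. The sketch below is a hypothetical internal check, not Anyscale's detector: it simply flags hosts that answer HTTP on Ray's default dashboard port (8265), on the deliberately conservative assumption that any response there deserves a closer look. The host list is a placeholder.

```python
# Minimal sketch: flag hosts on an internal inventory that answer on Ray's
# default dashboard port (8265). Any HTTP response is treated as "potentially
# exposed" -- a deliberately conservative assumption for triage purposes.
import socket
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

CANDIDATE_HOSTS = ["10.0.0.5", "10.0.0.6"]   # hypothetical inventory
DASHBOARD_PORT = 8265                         # Ray dashboard default

def dashboard_reachable(host: str, timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP on the Ray dashboard port."""
    try:
        with socket.create_connection((host, DASHBOARD_PORT), timeout=timeout):
            pass
    except OSError:
        return False
    try:
        urlopen(f"http://{host}:{DASHBOARD_PORT}/", timeout=timeout)
        return True
    except HTTPError:
        return True    # something answered HTTP, even if with an error status
    except URLError:
        return False

if __name__ == "__main__":
    for host in CANDIDATE_HOSTS:
        if dashboard_reachable(host):
            print(f"[!] {host}:{DASHBOARD_PORT} answers -- verify it is not Internet-facing")
```

Binding the dashboard to a loopback or internal-only interface (for example, via the --dashboard-host option of ray start) and keeping the port behind Internet-facing firewalls closes off the exposure the attackers abused.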
Private Does Not Mean Safe
While the vulnerability in the Ray framework exposed public-facing servers to attack, even private AI-powered LLMs and chatbots could face exploitation.
In May, AI-security firm Protect AI released the latest tranche of vulnerabilities discovered by its bug bounty community, Huntr, encompassing 32 issues from critical remote exploits to low-severity race conditions.
Some attacks may require access to the API, but others could be carried out through malicious documents and other vectors.
In his own research, Synopsys researcher Alshehri discovered the cross-site request forgery issue, which gives an attacker the ability to poison an LLM through a watering-hole attack.
By using a private instance of a chatbot service or internally hosting an LLM, many companies believe they have minimized the risk of exploitation, says Tyler Young, CISO at BigID, a data management firm.
New Software, Same Old Vulnerabilities
Companies need to assume that the current crop of AI systems and services has had only limited security design and review, because the platforms are often based on open source components maintained by small teams with limited oversight, says Synopsys's Alshehri.
Companies that are implementing AI systems based on internal data should segment the data, and the resulting LLM instances, so that employees can access only those LLM services built on data they are already authorized to see.
Each group of users with a given privilege level will need its own LLM instance, trained only on the data that group is permitted to access.
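A minimal sketch of that segmentation idea follows: one LLM or retrieval index per access group, built only from that group's documents, with queries routed by group membership. The group names and endpoint URLs are placeholders, not a reference to any particular product.

```python
# Minimal sketch of the segmentation idea: each access group gets its own
# LLM/RAG instance, indexed only on documents that group is cleared to see,
# and user queries are routed by group membership. Names/URLs are placeholders.
from dataclasses import dataclass

# One endpoint per privilege level; each was built only on that group's data.
LLM_ENDPOINTS = {
    "finance":     "https://llm-finance.internal.example/v1/chat",
    "engineering": "https://llm-eng.internal.example/v1/chat",
    "general":     "https://llm-general.internal.example/v1/chat",
}

@dataclass
class User:
    username: str
    groups: frozenset[str]

def endpoint_for(user: User) -> str:
    """Pick the most specific instance the user is entitled to, else the general one."""
    for group in ("finance", "engineering"):   # order encodes specificity
        if group in user.groups:
            return LLM_ENDPOINTS[group]
    return LLM_ENDPOINTS["general"]

if __name__ == "__main__":
    alice = User("alice", frozenset({"engineering"}))
    print(endpoint_for(alice))   # -> https://llm-eng.internal.example/v1/chat
```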
Finally, companies need to minimize the components they use to develop their AI tools, regularly update those software assets, and implement controls to make exploitation more difficult, he says.
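One practical way to act on that advice is to keep an explicit allowlist of the packages an AI service is permitted to use and flag anything outside it, then run a scanner such as pip-audit over what remains to catch known CVEs. The sketch below is illustrative only; the allowlist contents are examples, not a recommendation.

```python
# Minimal sketch: compare the packages actually installed in an AI service's
# environment against an explicit allowlist, so unexpected components stand out.
# The allowlist contents are illustrative only.
from importlib import metadata

ALLOWED = {
    "langchain", "chromadb", "fastapi", "uvicorn",   # example approved stack
}

def unexpected_packages() -> list[str]:
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in metadata.distributions()
    }
    installed.discard("")
    return sorted(installed - ALLOWED)

if __name__ == "__main__":
    extras = unexpected_packages()
    if extras:
        print("Packages outside the approved AI stack:")
        for name in extras:
            print(f"  - {name}")
```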


This article was originally published on www.darkreading.com on Thu, 30 May 2024 19:55:31 +0000.

