AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models in an automated fashion.
Their findings suggest the vulnerability is inherent to current LLM technology, and they see no obvious fix for it.
A variety of attack tactics can be used against LLM-based AI systems.
AI models can also be backdoored, poisoned, or have their sensitive training data extracted.
The automated adversarial machine learning technique devised by the Robust Intelligence and Yale University researchers enables jailbreaking attacks: it crafts prompts that override the restrictions placed on a model.
This jailbreaking method is automated, can be leveraged against both open and closed-source models, and is optimized to be as stealthy as possible by minimizing the number of queries.
The researchers tested the technique against a number of LLMs, including GPT-4, GPT-4 Turbo and PaLM-2, and found that it discovers jailbreaking prompts for more than 80% of requests for harmful information while using fewer than 30 queries.
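The article does not spell out the researchers' algorithm, but the behavior it describes - an automated search that iteratively refines a jailbreaking prompt while keeping the number of target queries low - can be sketched as a simple loop. Everything below is a hypothetical illustration: `query_target`, `refine_prompt`, and `judge` are stand-in stubs for calls to a target model, an attacker model, and a judge model, not the researchers' actual code.

```python
def query_target(prompt: str) -> str:
    """Placeholder for a call to the target LLM (in practice, an API request)."""
    # Toy behavior: the stub "complies" once the prompt has been reframed enough times.
    return "compliant answer" if prompt.count("[reframe]") >= 3 else "I can't help with that."

def refine_prompt(prompt: str, response: str) -> str:
    """Placeholder for an attacker LLM rewriting the prompt after a refusal."""
    return f"[reframe] {prompt}"

def judge(response: str) -> float:
    """Placeholder for a judge model scoring how fully the target complied (0 to 1)."""
    return 0.0 if "can't" in response else 1.0

def automated_jailbreak(goal: str, max_queries: int = 30, threshold: float = 0.9):
    """Refine the prompt until the judge deems a response harmful or the query budget runs out.

    Returns the successful prompt and the number of target queries spent,
    or (None, max_queries) if the budget is exhausted.
    """
    prompt = goal
    for queries_used in range(1, max_queries + 1):
        response = query_target(prompt)        # one query against the target
        if judge(response) >= threshold:
            return prompt, queries_used        # success within the query budget
        prompt = refine_prompt(prompt, response)
    return None, max_queries

prompt, n = automated_jailbreak("some harmful request")
```

With these stubs the loop succeeds on the fourth query; against a real model, the refinement and judging steps are where the actual attack logic would live, and the per-query budget is what keeps the attack stealthy.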
They shared their findings with the developers of the tested models before making them public.
As tech giants vie for leadership in the AI market by building new specialized large language models seemingly every few months, researchers - both independent and employed by those same companies - have been probing them for security weaknesses.
Google has set up an AI-specific Red Team and expanded its bug bounty program to cover AI-related threats.
Microsoft has also invited bug hunters to probe its various Copilot LLM integrations. Earlier this year, the AI Village at the DEF CON hacker convention hosted red teamers tasked with testing LLMs from Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to uncover vulnerabilities that leave LLMs open to manipulation.
Published on www.helpnetsecurity.com, Thu, 07 Dec 2023 11:13:04 +0000.