AI models can be weaponized to hack websites on their own - The Register

AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk that goes beyond the content they emit.
When wedded to tools that enable automated interaction with other systems, they can act on their own as malicious agents.
Computer scientists affiliated with the University of Illinois Urbana-Champaign have demonstrated this by weaponizing several large language models to compromise vulnerable websites without human guidance.
Prior research suggests LLMs can be used, despite safety controls, to assist with the creation of malware.
Researchers Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, and Daniel Kang went a step further and showed that LLM-powered agents - LLMs provisioned with tools for accessing APIs, automated web browsing, and feedback-based planning - can wander the web on their own and break into buggy web apps without oversight.
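The paper itself is not reproduced here, but the architecture described above, an LLM given callable tools and run in a feedback loop, can be sketched in a few dozen lines. The Python below is illustrative only: the single fetch_page tool, the run_agent helper, the prompt, and the use of OpenAI's chat-completions tool-calling interface are assumptions made for the example, not the researchers' implementation, and it only talks to a hypothetical local sandbox.

```python
# Minimal sketch of the agent pattern described above: an LLM wired to a
# tool (here, a single page-fetching function) and driven in a feedback
# loop. Illustrative only -- the tool set, prompt, and helper names are
# assumptions, not the researchers' implementation -- and it only talks
# to a local sandbox.
import json

import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOOLS = [{
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Fetch a URL from the local sandbox and return its HTML.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]


def fetch_page(url: str) -> str:
    """Tool implementation: only ever touches the local sandbox."""
    assert url.startswith("http://localhost"), "sandbox-only"
    return requests.get(url, timeout=10).text[:4000]


def run_agent(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages, tools=TOOLS
        ).choices[0].message
        messages.append(reply)
        if not reply.tool_calls:        # no more actions requested: done
            return reply.content or ""
        for call in reply.tool_calls:   # only one tool registered, so no dispatch table
            args = json.loads(call.function.arguments)
            observation = fetch_page(**args)
            messages.append({           # feed the result back as the next observation
                "role": "tool",
                "tool_call_id": call.id,
                "content": observation,
            })
    return "step budget exhausted"


if __name__ == "__main__":
    print(run_agent("List the form fields on http://localhost:8080/login"))
```

The essential ingredient is the feedback loop: each tool result is appended to the conversation, so the model's next plan is conditioned on what the previous action actually returned.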
In an interview with The Register, Daniel Kang, assistant professor at UIUC, emphasized that he and his co-authors did not actually let their malicious LLM agents loose on the world.
The tests, he said, were done on real websites in a sandboxed environment to ensure no harm would be done and no personal information would be compromised.
The researchers tested ten models in all. The first two, GPT-4 and GPT-3.5, are proprietary models operated by OpenAI, while the remaining eight are open source.
Google's Gemini model, said to be at least as capable as GPT-4 in its latest iteration, was not available at the time.
Every open source model failed, and GPT-3.5 did only marginally better.
The researchers had their LLM agents probe test websites for 15 vulnerabilities, including SQL injection, cross-site scripting, and cross-site request forgery.
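To make the first of those vulnerability classes concrete: SQL injection arises when untrusted input is concatenated into query text. The snippet below is a generic illustration of the flaw and its parameterized fix, not something drawn from the paper's test applications.

```python
# Generic illustration of the SQL injection class mentioned above; not
# taken from the paper's test applications.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('admin', 1)")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated straight into the SQL text, so the
# WHERE clause is rewritten and every row comes back.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print("vulnerable query returned:", leaked)    # [('alice',), ('admin',)]

# Fixed: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)   # []
```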
OpenAI's GPT-4 had an overall success rate of 73.3 percent with five passes and 42.7 percent with one pass.
One explanation cited in the paper is that GPT-4 was better able than the open source models to change its actions based on the response it got from the target website.
Related to this is backtracking, which refers to having a model revert to a previous state to try another approach when confronted with an error.
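A minimal way to picture backtracking in an agent loop is to snapshot the agent's state before each risky step, restore it when the step fails, and move on to the next candidate approach. The scaffold below is purely illustrative; the try_approaches and attempt helpers and the toy approaches are assumptions, not anything from the paper.

```python
# Illustrative backtracking scaffold, not the paper's code: snapshot the
# agent's state before each attempt, restore it when the attempt fails,
# and move on to the next candidate approach.
import copy


def try_approaches(state: dict, approaches, attempt):
    for approach in approaches:
        snapshot = copy.deepcopy(state)       # remember where we were
        try:
            return attempt(state, approach)   # may mutate state, may fail
        except RuntimeError as err:
            print(f"{approach!r} failed ({err}); backtracking")
            state = snapshot                  # revert and try the next approach
    raise RuntimeError("all approaches exhausted")


# Toy usage: the first approach corrupts state and fails; the second runs
# against the clean, restored state and succeeds.
def attempt(state, approach):
    state["history"].append(approach)
    if approach == "guess-endpoint":
        raise RuntimeError("404 from target")
    return state

print(try_approaches({"history": []}, ["guess-endpoint", "read-sitemap"], attempt))
# -> {'history': ['read-sitemap']}
```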
The researchers conducted a cost analysis of attacking websites with LLM agents and found the software agent is far more affordable than hiring a penetration tester.
Assuming a human security analyst paid $100,000 annually, or about $50 an hour, would take about 20 minutes to check a website manually, the researchers say a live pen tester would cost about $80 per site, or eight times the cost of an LLM agent.
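The article does not quote a per-site figure for the agent itself, but one is implied by the numbers above; as a rough sanity check:

```python
# Back-of-the-envelope check on the cost comparison quoted above. Only the
# $80 human figure and the "eight times" ratio come from the article; the
# per-site agent cost below is derived from them, not stated in the piece.
human_cost_per_site = 80.0   # dollars, as quoted for a manual check
human_to_agent_ratio = 8.0   # human cost is "eight times the cost of an LLM agent"

agent_cost_per_site = human_cost_per_site / human_to_agent_ratio
print(f"implied LLM-agent cost per site: ${agent_cost_per_site:.2f}")  # ~$10
```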
Asked whether cost might be a gating factor to prevent the widespread use of LLM agents for automated attacks, Kang said that may be somewhat true today but he expects costs will fall.
Kang said that while traditional safety concerns related to biased and harmful training data and model output are obviously very important, the risk expands when LLMs get turned into agents.
Midjourney, he said, had banned some researchers and journalists who pointed out its models appeared to be using copyrighted material.
The Register asked OpenAI to comment on the researchers' findings.


Published on go.theregister.com: Sat, 17 Feb 2024 12:13:07 +0000

