The accelerating development and expanding deployment of AI systems is creating significant security and privacy risks that aren't being mitigated by modern solutions, according to a research paper from the U.S. National Institute of Standards and Technology.
The cybersecurity community needs to develop better defenses, Vassilev said.
The report outlines that predictive and generative AI systems include data, machine learning models, and processes for training, testing, and deploying the models and necessary infrastructure.
Generative AI system also can be linked to corporate documents and databases.
These kinds of attacks already are happening and their sophistication and potential impact are increasing.
The report four types of attacks that can occur on AI systems, including evasion attacks, where a bad actor will try to alter an input to change how a system responds to it, such as adding markings that make autonomous vehicles misinterpret road signs.
Poisoning attacks involved adding corrupted data to a training dataset and privacy attacks hit when threat actors try to access sensitive information about the AI or the data an AI system was trained on in hopes of misusing it.
Abuse attacks involve inserting incorrect information into a source, like a legitimate webpage or online document, that an AI system pulls in.
In the report, the researchers talk about the size of the large-language models used to create generative AI and the large datasets being used to train them.
A challenge is that the datasets are too large for individuals to monitor and filter properly, so there are no foolproof methods for protecting AI from misdirection, they wrote.
It's important that developers and organizations that want to deploy and use AI technologies are aware of the limitations, according to Vassilev.
The paper is part of the White House's whole-of-government approach to dealing with the growing threat presented by the rapid innovation around AI. NIST last year unveiled its AI Risk Management Framework and is seeking comments through February 2 on its efforts to create trustworthy ways of developing and using AI. Other agencies also are taking on the challenge of securing AI development.
The Cybersecurity and Infrastructure Security Agency in August 2023 advised developers that AI applications - like all software - need to have security designed into them.
The same month, the U.S. Defense Advanced Research Projects Agency unveiled the AI Cyber Challenge to urge cybersecurity and AI specialists to create ways to automatically detect and fix software flaws and protect critical infrastructure.
High-profile companies like Google, Microsoft, OpenAI, and Meta are working with the White House to address risks posed by AI, and Google, Microsoft, OpenAI, and Anthropic in July 2023 announced the Frontier Model Forum, an industry group developing ways to ensure the safe development of foundation AI models.
In November, the Federal Trade Commission and Federal Communications Commission announced separate efforts to protect consumers against scammers using AI-enabled voice technologies in fraud and other schemes, with the FTC this month asking for submissions for ways to address the malicious use of voice-cloning technologies.
The same agency also is hosting a virtual summit January 25 to talk about the emerging AI market and its potential impacts.
This Cyber News was published on securityboulevard.com. Publication date: Mon, 08 Jan 2024 17:43:49 +0000