It also elevated startups working on machine learning security operations, AppSec remediation, and adding privacy to AI with fully homomorphic encryption.
AI's largest attack surface is its foundational models, such as Meta's Llama, or those produced by giants such as Nvidia, OpenAI, and Microsoft.
The overwhelming majority of today's machine learning development involves reusing these foundational models.
At least at the moment, building bespoke models from scratch has proven too expensive.
Instead, engineers tune foundational models, train them on additional data, and blend these models into traditional software development.
Foundational models have all the existing vulnerabilities of the software supply chain, plus AI's new mathematical threats.
Changing even a single pixel in an image can induce a different model output.
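As a toy illustration of the mechanism (a tiny input change pushing a score across a decision boundary), consider a hypothetical linear classifier over a flattened 3x3 image; the weights, pixel values, and labels here are invented for the sketch and stand in for a real vision model:

```python
# Toy illustration: a tiny linear "classifier" over a flattened 3x3 image.
# Weights and labels are invented for this sketch; real attacks target deep
# networks, but the mechanism -- one small input change crossing a decision
# boundary -- is the same.

WEIGHTS = [0.1, -0.3, 0.1,
           0.9,  0.1, -0.1,   # one strongly weighted pixel (index 3)
           0.05, -0.05, 0.1]

def predict(image):
    """Return 'cat' if the weighted sum is positive, else 'dog'."""
    score = sum(w * x for w, x in zip(WEIGHTS, image))
    return "cat" if score > 0 else "dog"

image = [0.2, 0.5, 0.3,
         0.4, 0.6, 0.2,
         0.1, 0.3, 0.5]

adversarial = list(image)
adversarial[3] = 0.0  # change a single pixel; the prediction flips
```

The original image scores positive ("cat"); zeroing the heavily weighted pixel drags the sum negative ("dog") even though the image is otherwise identical.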
Patching all the known vulnerabilities in a model is not easy, and even after patching, there will always be new ways to alter inputs and attack foundational models.
Thousands of academic papers describe adversarial AI attacks on deployed production models, as does the MITRE Atlas framework.
Adversarial AI wielded against models in production environments has caught the public's attention.
Consider, though, that potential victims may throttle model queries so aggressively that attackers never get enough interactions for the attacks in MITRE Atlas to work.
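A minimal sketch of that defense, assuming a hypothetical gateway sitting in front of the model (the class name and limits are invented): cap queries per client within a sliding window so iterative probing stalls out.

```python
import time
from collections import defaultdict, deque

class QueryThrottle:
    """Allow at most `max_queries` per client within a sliding `window`
    (seconds). Hypothetical gateway logic, invented for this sketch."""

    def __init__(self, max_queries=5, window=60.0):
        self.max_queries = max_queries
        self.window = window
        self.history = defaultdict(deque)  # client_id -> query timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # block: deny the attacker further interactions
        q.append(now)
        return True
```

Many adversarial attacks need hundreds or thousands of queries to converge; a budget this small starves them long before that.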
It secures bespoke model development and training data, and analyzes foundational models for vulnerabilities.
A further debate is driven by Adversa AI and Calypso AI, which are both skeptical that foundational models can ever be secured.
Adversa AI automates foundational model pen testing and validation, along with red-team services.
Calypso AI focuses on scoring vulnerabilities at the point of model prompts and their responses, logging or blocking them as needed.
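The general pattern of scoring traffic at the prompt/response boundary can be sketched as follows; this is a hypothetical checkpoint with an invented term list and threshold, not Calypso AI's actual product:

```python
import logging

# Hypothetical prompt/response checkpoint (not any vendor's real product):
# score each prompt, log low-risk traffic, block high-risk traffic.

RISKY_TERMS = {"ignore previous instructions", "exfiltrate", "system prompt"}

def risk_score(text):
    """Fraction of risky terms present -- a stand-in for a real scoring model."""
    t = text.lower()
    return sum(term in t for term in RISKY_TERMS) / len(RISKY_TERMS)

def checkpoint(prompt, threshold=0.3):
    score = risk_score(prompt)
    if score >= threshold:
        logging.warning("blocked prompt (score=%.2f)", score)
        return None  # block before the model ever sees it
    logging.info("allowed prompt (score=%.2f)", score)
    return prompt  # pass through to the model
```

A production system would score responses the same way on the return path, so leaks are caught even when the prompt looked benign.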
Startups Got Realistic About Fully Homomorphic Encryption

FHE is quite different from the all-or-nothing encryption of old.
With FHE, data can remain encrypted while being productively used by many ML algorithms, neural networks, and even large language models.
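To make "computing on data that stays encrypted" concrete, here is a textbook Paillier scheme, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. It is not full FHE (which also supports multiplication of plaintexts), and the tiny primes are insecure and chosen purely for illustration:

```python
import math
import random

# Textbook Paillier cryptosystem: additively homomorphic, NOT full FHE and
# NOT secure (tiny primes for illustration only). It shows the core idea FHE
# generalizes: a server computes on ciphertexts without seeing plaintexts.

p, q = 61, 53
n = p * q                  # public modulus
n2 = n * n
g = n + 1                  # standard generator choice
lam = math.lcm(p - 1, q - 1)                      # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)       # decryption helper

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: multiply ciphertexts, plaintexts add.
c_sum = (encrypt(12) * encrypt(30)) % n2
```

Decrypting `c_sum` recovers 42 even though whoever multiplied the ciphertexts never saw 12 or 30; FHE schemes extend this so arbitrary circuits, including neural network inference, run over encrypted inputs.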
Two smaller FHE startups also received strategic investments in 2023.
Only a small number of innovators at early-growth startups have coherent visions of AI security.
This Cyber News was published on www.darkreading.com. Publication date: Tue, 02 Jan 2024 15:05:25 +0000