A report published by Protect AI today identifies remote code execution (RCE) vulnerabilities in MLflow, an open source machine learning life cycle management tool, that can be used to compromise artificial intelligence (AI) models.
Specifically, the report finds MLflow, which is widely used to build AI models, had an RCE vulnerability in the code used to pull data down from remote storage.
The report also describes a bypass of an MLflow function that validates whether a file path is safe, which allows a malicious user to remotely overwrite files on the MLflow server.
Finally, MLflow hosted on certain types of operating systems could be manipulated into displaying the contents of sensitive files via a file path safety bypass.
There is potential for full system takeover if SSH keys or cloud keys were stored on the server and MLflow was started with permission to read them, the report noted.
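To illustrate the class of flaw at issue, the sketch below is a minimal, hypothetical Python example, not MLflow's actual validation code, and the paths shown are made up: a naive file path safety check that only rejects absolute paths can be bypassed with "../" traversal sequences, while a stricter check that resolves the final path and confirms it stays inside the artifact root rejects the same input.

```python
import os

def is_safe_path_naive(base_dir: str, user_path: str) -> bool:
    """Hypothetical naive check: it only rejects absolute paths, so
    traversal sequences such as "../../" still slip through."""
    return not user_path.startswith("/")

def is_safe_path_strict(base_dir: str, user_path: str) -> bool:
    """Resolve the joined path and confirm it stays under base_dir."""
    base = os.path.realpath(base_dir)
    resolved = os.path.realpath(os.path.join(base_dir, user_path))
    return os.path.commonpath([base, resolved]) == base

if __name__ == "__main__":
    base = "/srv/mlflow/artifacts"            # hypothetical artifact root
    attack = "../../root/.ssh/id_rsa"         # path traversal attempt
    print(is_safe_path_naive(base, attack))   # True  -> the check is bypassed
    print(is_safe_path_strict(base, attack))  # False -> traversal rejected
```

In a scenario like this, a bypassed check is what lets an attacker read or overwrite files outside the intended directory, which is why access to stored SSH or cloud keys becomes the pivot to a broader takeover.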
Protect AI notified the maintainers of MLflow of these issues, and the latest version of the platform remediates them.
Protect AI president Daryan Dehghanpisheh said these vulnerabilities are the latest in an ongoing series that have been discovered via a bug bounty program the company manages.
In general, securing AI models requires organizations to adopt tools and MLSecOps best practices.
In addition to attempting to take over AI models, cybercriminals are trying to steal AI models that are among an organization's most valuable intellectual property, he noted.
At the same time, cybercriminals are attempting to inject malware into the software components used to create AI models and to poison the data sources used to train them, in the hope of forcing a model to hallucinate in ways that destroy its business value.
Those threats are especially problematic because, unlike other software artifacts, there is no way to patch an AI model.
Instead, organizations incur the expense of having to retrain the entire AI model any time an issue is discovered, said Dehghanpisheh.
AI models are, by definition, among the most brittle types of software artifacts an organization can deploy, he added.
On the plus side, thanks to safety concerns, more attention is being paid to AI security much earlier than it has been for other types of emerging technologies, he added.
Many organizations are already appointing AI security chiefs who are being allocated specific budgets to ensure the integrity of AI models, he noted.
It's not clear to what degree AI models are being compromised or outright stolen, but most of them are less secure than many organizations realize.
AI models, much like any other software artifact, are built using software components that have inherent vulnerabilities.
In effect, they are insecure by design because of the dependencies that exist throughout the AI model supply chain, noted Dehghanpisheh.
Each organization that embraces AI will eventually need to address cybersecurity concerns.