Open Source AI Models: Big Risks for Malicious Code, Vulns

Companies pursuing internal AI development using models from Hugging Face and other open source repositories need to focus on supply chain security and on checking those models for vulnerabilities.

Attackers are finding more and more ways to post malicious projects to Hugging Face and other repositories for open source artificial intelligence (AI) models, while dodging the sites' security checks. While the attacks appeared to be proofs-of-concept, their success in being hosted with a "No issue" tag shows that companies should not rely on Hugging Face's and other repositories' safety checks for their own security, says Tomislav Pericin, chief software architect at ReversingLabs.

Companies are quickly adopting AI, and the majority are also establishing internal projects using open source AI models from repositories such as Hugging Face, TensorFlow Hub, and PyTorch Hub. "You kind of need to manage AI models like you would any other open source dependencies," Stiefel says. Licensing is another issue: While pretrained AI models are frequently called "open source AI," they generally do not come with all the information needed to reproduce them, such as the code and training data.

While Hugging Face runs explicit checks on Pickle files, the malicious code discovered by ReversingLabs sidestepped those checks by using a different compression format for the data. Separate research by application security firm Checkmarx found multiple ways to bypass the scanners, such as the PickleScan tool used by Hugging Face, that are meant to detect dangerous Pickle files. Despite vocal warnings from security researchers, the Pickle format continues to be used by many data scientists, says Tom Bonner, vice president of research at HiddenLayer, an AI-focused detection and response firm. Rather than Pickle files, data science and AI teams should move to Safetensors (a library for a newer data format managed by Hugging Face, EleutherAI, and Stability AI), which has been audited for security.
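The reason researchers keep warning about Pickle is that deserializing a Pickle file can run arbitrary code by design. The stdlib-only sketch below illustrates the class of risk (it is not the NullifAI payload itself): an object's `__reduce__` method tells Pickle which callable to invoke at load time, and here a harmless `print` stands in for what could just as easily be `os.system`.

```python
import pickle

# A class whose __reduce__ tells pickle to call an arbitrary
# function at load time. print() is used as a stand-in for a
# real payload such as os.system or an attacker-supplied callable.
class Payload:
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

# Serializing the object embeds the call into the byte stream.
blob = pickle.dumps(Payload())

# Merely deserializing the bytes runs the embedded call --
# no method on the resulting object ever needs to be invoked.
pickle.loads(blob)  # prints "code executed during unpickling"
```

This load-time execution is exactly what formats like Safetensors avoid: they store only tensor data and metadata, with no mechanism for embedding callables.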
The escalating problem underscores the need for companies pursuing internal AI projects to have robust mechanisms for detecting security flaws and malicious code within their supply chains. Hugging Face's automated checks, for example, recently failed to detect malicious code in two AI models hosted on the repository, according to an analysis published in February. The threat actor used a common vector, data files in the Pickle format, with a new technique dubbed "NullifAI" to evade detection. Despite having malicious features, the model passed security checks on Hugging Face. "PickleScan uses a blocklist which was successfully bypassed using both built-in Python dependencies," Dor Tumarkin, director of application security research at Checkmarx, stated in the analysis.

Overall, 61% of companies are using models from the open source ecosystem to create their own AI tools, according to a Morning Consult survey of 2,400 IT decision-makers sponsored by IBM. Yet many of the components can contain executable code, leading to a variety of security risks, such as code execution, backdoors, prompt injections, and alignment issues (the latter being how well an AI model matches the intent of its developers and users). "There is already research about what kind of prompts would trigger the model to behave in an unpredictable way, divulge confidential information, or teach things that could be harmful," he says. In addition, companies should pay attention to common signals of software safety, including the source of the model, the development activity around it, its popularity, and its operational and security risks, Endor's Stiefel says.
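Tumarkin's point about blocklists can be made concrete with a short, stdlib-only sketch of how a PickleScan-style scanner works: it statically walks the pickle's opcode stream and flags any `GLOBAL` import that matches a deny list, which is precisely why any dangerous built-in missing from that list slips through. The blocklist below is illustrative only, not PickleScan's actual list.

```python
import os
import pickle
import pickletools

# Illustrative deny list of (module, name) import targets, in the
# spirit of a blocklist-based scanner. Anything not listed here
# would pass the scan, which is the fundamental weakness.
DANGEROUS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("subprocess", "Popen"),
    ("builtins", "eval"), ("builtins", "exec"),
}

def scan(blob):
    """Statically walk a pickle's opcodes (without executing it)
    and report any GLOBAL imports that appear on the deny list."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(blob):
        if opcode.name == "GLOBAL":
            # pickletools renders the argument as "module name"
            module, name = arg.split(" ", 1)
            if (module, name) in DANGEROUS:
                hits.append((module, name))
    return hits

class Payload:
    def __reduce__(self):
        # os.system is recorded in the pickle as an import reference;
        # nothing is executed while creating or scanning the file.
        return (os.system, ("echo hi",))

# Protocol 2 encodes the import as a GLOBAL opcode the scanner can see.
blob = pickle.dumps(Payload(), protocol=2)
print(scan(blob))  # flags the system() import
```

Newer pickle protocols encode imports with `STACK_GLOBAL` instead of `GLOBAL`, and payloads can be wrapped in alternate archive formats, as in the NullifAI case, so a real scanner has many more paths to cover than this sketch does.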

This Cyber News was published on www.darkreading.com. Publication date: Fri, 14 Feb 2025 15:05:03 +0000

