Protect AI Unveils Gateway to Secure AI Models

Protect AI today launched Guardian, a gateway that enables organizations to enforce security policies designed to prevent malicious code from executing within an artificial intelligence (AI) model.
Guardian is based on ModelScan, an open source tool from Protect AI that scans machine-learning models to determine if they contain unsafe code.
Guardian extends that capability to a gateway organizations can use to thwart, for example, a model serialization attack.
This occurs when code is added to the contents of a model during serialization, also known as saving.
Once added to a model, this malicious code can be executed to steal data and credentials or poison the data used to train the AI model.
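
Pickle-based formats make the risk concrete. The sketch below, with purely illustrative class and file names, shows how a payload embedded at serialization time runs the moment the artifact is deserialized; it is a minimal demonstration of the attack pattern, not an example drawn from Guardian or ModelScan.

```python
# Minimal sketch of a pickle-based model serialization attack (illustrative
# names throughout). A class can define __reduce__ so arbitrary code runs
# the moment the "model" file is deserialized -- no inference call needed.
import os
import pickle

class MaliciousModel:
    def __reduce__(self):
        # pickle.load() will call os.system with this argument; a real
        # payload could steal credentials or tamper with training data.
        return (os.system, ("echo 'arbitrary code executed on load'",))

# "Saving" the model embeds the payload in the serialized artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Anyone who loads the artifact triggers the payload immediately.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```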
Cybersecurity is one of the more challenging issues in AI, and it isn't getting the attention it deserves.
Most data scientists don't have much cybersecurity training, so the machine learning operations (MLOps) processes relied on to construct an AI model often lack any kind of vulnerability scanning.
Protect AI's co-founder and CEO Ian Swanson said even when a scan is run, there's a tendency to rely on tools made available via an AI model repository such as Hugging Face.
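
One way to close that gap is to run an explicit scan as a gate in the pipeline rather than trusting whatever checks the hosting repository performed. Below is a minimal sketch that assumes ModelScan is installed and invoked from a CI job; the artifact path, the -p flag, and the exit-code policy should be verified against the current ModelScan documentation.

```python
# Hypothetical pre-deployment gate: scan a downloaded model artifact and
# block promotion if the scanner reports findings. The CLI invocation
# reflects ModelScan's documented usage at the time of writing; verify the
# flags against the current documentation before relying on this.
import subprocess
import sys

MODEL_PATH = "artifacts/model.pkl"  # illustrative path

result = subprocess.run(
    ["modelscan", "-p", MODEL_PATH],
    capture_output=True,
    text=True,
)
print(result.stdout)

# Treat a non-zero exit code as "unsafe or scan error" and stop the pipeline.
if result.returncode != 0:
    print("Model scan flagged issues; blocking deployment.", file=sys.stderr)
    sys.exit(1)
```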
Protect AI today released a report showing that its ModelScan tool was used to evaluate more than 400,000 model artifacts hosted on Hugging Face.
Protect AI reported that it found 3,354 models that use functions capable of executing arbitrary code on model load or inference.
A full 1,347 of those models were not marked unsafe by the Hugging Face scanning tool.
That's an issue because it means malicious actors can potentially upload compromised models that inject code when a model thought to be secure is loaded or executed, said Swanson.
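
On the consuming side, one partial defense is to load third-party weights through formats and flags that refuse to execute code. The sketch below assumes PyTorch and the safetensors package are available; the file names are placeholders.

```python
# Defensive loading of third-party weights: prefer loaders that do not
# deserialize arbitrary objects. File names below are placeholders.
import torch
from safetensors.torch import load_file

# weights_only=True (available in recent PyTorch releases) restricts
# torch.load to plain tensors and rejects arbitrary pickled objects.
state_dict = torch.load("downloaded_model.pt", weights_only=True)

# Where the publisher offers it, the safetensors format is safer still:
# it stores raw tensors and cannot carry an executable payload by design.
state_dict = load_file("downloaded_model.safetensors")
```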
A recent report published by Protect AI identified remote code execution vulnerabilities in MLflow, an open source machine learning life cycle management tool; those flaws could be used to compromise AI models.
Many teams are downloading models from public repositories for use in production environments, and many of those models were built using code that has known vulnerabilities, noted Swanson.
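
Dependency issues, at least, can be surfaced before a model ships. A minimal sketch, assuming pip-audit is installed and the project's packages are pinned in a requirements.txt file; the hard-fail policy is an illustrative choice.

```python
# Hypothetical dependency audit before promoting a model: check the pinned
# packages against known CVEs with pip-audit (a PyPA tool). The requirements
# file path and the hard-fail policy are illustrative choices.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    print("Known-vulnerable dependencies found; review before deploying.",
          file=sys.stderr)
    sys.exit(result.returncode)
```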
If a vulnerability is discovered later, the model, unlike other types of software artifacts, can't be patched to remediate the issue.
The entire model will need to be retrained, so in effect, many AI models are insecure by design, said Swanson.
The cost of retraining an AI model is high, so a cybersecurity event involving a model is likely to be much more costly to remediate than one involving any other type of software artifact.
The degree to which AI models are being compromised is not well known, but it's clear most of them are less secure than many organizations realize.
It's only a matter of time before organizations that embrace AI run into these cybersecurity issues.


This Cyber News was published on securityboulevard.com. Publication date: Wed, 24 Jan 2024 23:13:05 +0000

