Intel Discloses Max Severity Bug in Its AI Model Compression Software

Intel has disclosed a maximum severity vulnerability in some versions of its Intel Neural Compressor software for AI model compression.
The bug, designated as CVE-2024-22476, provides an unauthenticated attacker with a way to execute arbitrary code on systems running affected versions of the software.
The vulnerability is the most serious among dozens of flaws the company disclosed in a set of 41 security advisories this week.
Improper Input Validation

Intel identified CVE-2024-22476 as stemming from improper input validation, or a failure to properly sanitize user input.
The chip maker has given the vulnerability a maximum score of 10 on the CVSS scale because the flaw is remotely exploitable with low complexity and has a high impact on data confidentiality, integrity, and availability.
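Intel has not published technical details of the flaw, so the following is a generic, hypothetical sketch of the improper-input-validation pattern described above; the function names are illustrative and do not come from the Neural Compressor codebase:

```python
import ast

def parse_config_unsafe(user_input: str):
    # DANGEROUS: eval() executes arbitrary Python, so a string like
    # "__import__('os').system('...')" would run attacker-supplied code.
    return eval(user_input)

def parse_config_safe(user_input: str) -> dict:
    # ast.literal_eval accepts only Python literals (dicts, lists,
    # numbers, strings) and raises ValueError for anything executable.
    value = ast.literal_eval(user_input)
    if not isinstance(value, dict):
        raise ValueError("config must be a dict of literals")
    return value
```

The safe variant rejects executable expressions outright rather than trying to filter dangerous substrings, which is the general remediation for this class of bug.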
The vulnerability affects Intel Neural Compressor versions before 2.5.0.
Intel has recommended that organizations using the software upgrade to version 2.5.0 or later.
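A minimal sketch of checking whether an installed version falls in the affected range, assuming a plain numeric major.minor.patch version string (production code should use a proper version-parsing library to handle pre-release tags):

```python
def is_affected(version: str) -> bool:
    """Return True if the given version predates the 2.5.0 fix.

    Assumes a dot-separated numeric version string such as "2.4.1".
    """
    parts = tuple(int(p) for p in version.split("."))
    return parts < (2, 5, 0)
```

For example, `is_affected("2.4.1")` returns `True`, while `is_affected("2.5.0")` returns `False`.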
Intel's advisory indicated that the company learned of the vulnerability from an external security researcher whom it did not identify.
Intel Neural Compressor is an open source Python library that helps compress and optimize deep learning models for tasks such as computer vision, natural language processing, recommendation systems, and a variety of other use cases.
Techniques for compression include neural network pruning - removing the least important parameters; reducing memory requirements through a process called quantization; and distilling a larger model into a smaller one with similar performance.
The goal with AI model compression technology is to help enable the deployment of AI applications on diverse hardware devices, including those with limited or constrained computational power, such as mobile devices.
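As a rough illustration of the quantization technique mentioned above - a generic sketch of the underlying idea, not the Neural Compressor API - symmetric int8 quantization maps float weights onto the 8-bit integer range with a single scale factor:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric quantization: map floats onto [-127, 127] with one
    # shared scale factor, so q = round(w / scale).
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original weights: w ~ q * scale.
    return q.astype(np.float32) * scale
```

Storing int8 instead of float32 cuts weight memory by 4x, at the cost of a small reconstruction error bounded by the quantization step size.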
One Among Many

CVE-2024-22476 is actually one of two vulnerabilities in Intel's Neural Compressor software that the company disclosed - and for which it released a fix - this week.
Intel assessed the second flaw as presenting only a moderate risk because, among other things, exploiting it requires an attacker to already have local, authenticated access to a vulnerable system.
In addition to the Neural Compressor flaws, Intel also disclosed five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products.
Intel's advisory listed all the vulnerabilities as input validation flaws, with severity scores ranging from 7.2 to 7.5 on the CVSS scale.
Emerging AI Vulnerabilities

The Neural Compressor vulnerabilities are examples of what security analysts have recently described as the expanding - but often overlooked - attack surface that AI software and tools are creating at enterprise organizations.
A lot of the security concerns around AI software so far have centered on the risks in using large language models and LLM-enabled chatbots like ChatGPT. Over the past year, researchers have released numerous reports on the susceptibility of these tools to model manipulation, jailbreaking, and several other threats.
What has been somewhat less of a focus so far has been the risk to organizations from vulnerabilities in some of the core software components and infrastructure used in building and supporting AI products and platforms.
A recent study commissioned by the UK's Department for Science, Innovation and Technology identified numerous potential cyber-risks to AI technology at every life cycle stage, from the software design phase through development, deployment, and maintenance.
The risks range from a failure to do adequate threat modeling and a lack of secure authentication and authorization in the design phase, to code vulnerabilities, insecure data handling, inadequate input validation, and a long list of other issues.


This Cyber News was published on www.darkreading.com. Publication date: Sat, 18 May 2024 08:05:25 +0000

