Intel has disclosed a maximum severity vulnerability in some versions of its Intel Neural Compressor software for AI model compression.
The bug, designated as CVE-2024-22476, provides an unauthenticated attacker with a way to execute arbitrary code on Intel systems running affected versions of the software.
The vulnerability is the most serious among dozens of flaws the company disclosed in a set of 41 security advisories this week.
Improper Input Validation

Intel identified CVE-2024-22476 as stemming from improper input validation, or a failure to properly sanitize user input.
The chip maker has given the vulnerability a maximum score of 10 on the CVSS scale because the flaw is remotely exploitable with low complexity and has a high impact on data confidentiality, integrity, and availability.
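As a generic illustration of the flaw class (not Intel's actual code), improper input validation often comes down to passing untrusted strings into a dangerous sink such as `eval`, where a crafted string becomes arbitrary code execution; the safer pattern parses the input as data and rejects everything else:

```python
import ast

def unsafe_compute(expr: str):
    # DANGEROUS: eval() executes arbitrary Python, so input like
    # "__import__('os').system('...')" runs attacker-controlled code.
    return eval(expr)

def safe_compute(expr: str) -> float:
    # Safer: ast.literal_eval accepts only Python literals and raises
    # ValueError/SyntaxError for anything that is not pure data.
    value = ast.literal_eval(expr)
    if not isinstance(value, (int, float)):
        raise ValueError("expected a numeric literal")
    return value

print(safe_compute("3.5"))  # accepted: a plain numeric literal
try:
    safe_compute("__import__('os').system('id')")
except (ValueError, SyntaxError):
    print("rejected: not a literal")
```

The fix for this class of bug is the same in spirit as Intel's patch guidance: treat all external input as hostile and validate it against a strict grammar before it reaches any code path that can execute it.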
The vulnerability affects Intel Neural Compressor versions before 2.5.0.
Intel has recommended that organizations using the software upgrade to version 2.5.0 or later.
Intel's advisory indicated that the company learned of the vulnerability from an external security researcher or entity whom the company did not identify.
Intel Neural Compressor is an open source Python library that helps compress and optimize deep learning models for tasks such as computer vision, natural language processing, recommendation systems, and a variety of other use cases.
Techniques for compression include neural network pruning, or removing the least important parameters; quantization, a process that reduces memory requirements by representing weights with lower-precision numbers; and distilling a larger model into a smaller one with similar performance.
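The pruning and quantization ideas can be sketched in plain Python. This is a toy illustration of the two techniques, not the Intel Neural Compressor API:

```python
def prune(weights, keep_ratio=0.5):
    """Magnitude pruning: zero out the smallest-magnitude parameters."""
    k = max(1, int(len(weights) * keep_ratio))
    # Keep the k largest-magnitude weights, zero the rest.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize(weights, bits=8):
    """Uniform quantization: map floats to low-bit integers to cut memory."""
    levels = 2 ** (bits - 1) - 1            # e.g. 127 for signed int8
    scale = max(abs(w) for w in weights) / levels or 1.0
    return [round(w / scale) for w in weights], scale

weights = [0.9, -0.05, 0.4, -0.7, 0.01, 0.3]
pruned = prune(weights, keep_ratio=0.5)     # [0.9, 0.0, 0.4, -0.7, 0.0, 0.0]
q, scale = quantize(weights, bits=8)        # small ints plus one float scale
dequantized = [qi * scale for qi in q]      # close to the original weights
```

Real toolkits apply these ideas per layer with calibration data and accuracy targets, but the memory arithmetic is the same: an 8-bit quantized weight takes a quarter of the space of a 32-bit float.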
The goal with AI model compression technology is to help enable the deployment of AI applications on diverse hardware devices, including those with limited or constrained computational power, such as mobile devices.
One Among Many

CVE-2024-22476 is actually one of two vulnerabilities in Intel's Neural Compressor software that it disclosed - and for which it released a fix - this week.
Intel assessed the second flaw as presenting only a moderate risk because, among other things, an attacker would need local, authenticated access to a vulnerable system to exploit it.
In addition to the Neural Compressor flaws, Intel also disclosed five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products.
Intel's advisory listed all the vulnerabilities as input validation flaws, with severity scores ranging from 7.2 to 7.5 on the CVSS scale.
Emerging AI Vulnerabilities

The Neural Compressor vulnerabilities are examples of what security analysts have recently described as the expanding - but often overlooked - attack surface that AI software and tools are creating at enterprise organizations.
A lot of the security concerns around AI software so far have centered on the risks in using large language models and LLM-enabled chatbots like ChatGPT. Over the past year, researchers have released numerous reports on the susceptibility of these tools to model manipulation, jailbreaking, and several other threats.
What has been somewhat less of a focus so far has been the risk to organizations from vulnerabilities in some of the core software components and infrastructure used in building and supporting AI products and platforms.
A recent study commissioned by the UK's Department for Science, Innovation and Technology identified numerous potential cyber-risks to AI technology at every life cycle stage, from the software design phase through development, deployment, and maintenance.
The risks range from a failure to do adequate threat modeling and to account for secure authentication and authorization in the design phase, to code vulnerabilities, insecure data handling, inadequate input validation, and a long list of other issues.
This article was published on www.darkreading.com on Sat, 18 May 2024 08:05:25 +0000.