Applications developed within open-source communities often face greater security challenges: they are free and widely available, maintained largely by volunteers, and frequently lack dedicated security investment. Even if a major open-source AI project hasn't been compromised yet, it is only a matter of time before one is.
Let's explore why open-source AI security is lacking and what security professionals can do to improve it.
First, it's essential to acknowledge that AI is not something different from software; it is software.
It is part of the operation of IT systems and thus part of the software supply chain.
Of equal note, software supply chain security is not just about web applications, command-line tooling, or the other artifacts people usually picture when they think of software. It protects every component and process involved as organizations develop, distribute, and deploy software.
Every stage of software development, from coding and building through deployment, production, and maintenance, is involved and needs to be secured.
The challenges within the AI supply chain mirror those of the broader software supply chain, with added complexity once large language models or other machine learning models are integrated into an organization's systems.
Consider a scenario where a financial institution wants to use AI models for loan risk assessment. This application demands meticulous scrutiny of the AI model's software supply chain and the origins of its training data to ensure compliance with regulatory standards, such as the prohibition on using protected categories in loan approval decisions. The bank must therefore assess both the model's software supply chain and its training data supply chain to prevent biases that could create legal or regulatory complications.
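As an illustrative example, here is a minimal Python sketch of the kind of automated pre-deployment check a bank's security team might run against a model's documentation. The model card format, field names, and attribute list are assumptions for illustration, not a standard:

```python
# Hypothetical sketch: screen a model's documented training features
# against attributes that must not influence loan decisions.
# The model card structure and field names below are assumptions;
# adapt them to whatever documentation your model supplier provides.

import json

# Illustrative (non-exhaustive) list of protected categories
# under fair-lending rules.
PROTECTED_ATTRIBUTES = {
    "race", "religion", "sex", "national_origin", "age", "marital_status",
}

def audit_model_card(path: str) -> list[str]:
    """Return any protected attributes declared among the model's training features."""
    with open(path) as f:
        card = json.load(f)  # assumed structure: {"training_features": [...]}
    features = {feat.lower() for feat in card.get("training_features", [])}
    return sorted(features & PROTECTED_ATTRIBUTES)

if __name__ == "__main__":
    violations = audit_model_card("loan_model_card.json")  # hypothetical file
    if violations:
        print(f"Blocked: model trained on protected attributes: {violations}")
    else:
        print("No declared protected attributes; proceed to deeper review.")
```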
Recent research indicates an inverse relationship between the popularity of open-source AI software tools and their security posture. Put simply, the more widely adopted an open-source AI tool or model is, the more security vulnerabilities it is likely to carry.
The prevalence of open-source AI models trained on potentially illegal or unethical data poses significant legal and regulatory risks for users.
So what can security professionals do? Two steps stand out:

- Security specifications: Advocate for greater transparency and accountability within the open-source community by demanding essential security metadata such as a Software Bill of Materials (SBOM), SLSA provenance, and SARIF scan results (a minimal example follows this list).
- Open-source security tools: Partner with companies that commercially support security projects such as Allstar, GUAC, and in-toto attestations, so a vendor bears some of the liability while you still benefit from open-source innovation.
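To make the first ask concrete, here is a minimal Python sketch that inventories the components declared in a CycloneDX JSON SBOM, one common SBOM format; the input file name is an assumption:

```python
# Minimal sketch: inventory components from a CycloneDX JSON SBOM.
# CycloneDX is one widely used SBOM format; the file name is illustrative.

import json

with open("model-server.cdx.json") as f:  # hypothetical SBOM file
    sbom = json.load(f)

# Each CycloneDX component typically carries a name, version, and
# package URL (purl) that can be matched against vulnerability feeds.
for component in sbom.get("components", []):
    name = component.get("name", "<unnamed>")
    version = component.get("version", "<unversioned>")
    purl = component.get("purl", "")
    print(f"{name} {version} {purl}")
```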
CISOs and their security teams need visibility into the software running in their organization's environments in order to secure it. With that information, they can make informed, risk-based decisions about the software components they integrate into those environments.
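As one example of such a risk-based check, the sketch below queries the public OSV.dev vulnerability database for known issues in a single dependency; the package name and version are illustrative:

```python
# Sketch: check one dependency against the OSV.dev vulnerability database
# before approving it. The package and version here are examples only.

import json
import urllib.request

def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return known vulnerabilities for a package version from OSV.dev."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Example query for an AI-adjacent dependency.
    for vuln in query_osv("torch", "2.0.0"):
        print(vuln["id"], "-", vuln.get("summary", "no summary"))
```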
Relying on volunteer efforts for security without contribution or investment is unsustainable and ineffective.