Unsanctioned AI, also known as shadow AI, poses even more challenges.
Shadow AI is just like every other stripe of shadow IT - unsanctioned technology that corporate employees deploy ad hoc and use in ways unknown to or hidden from an organization's central IT and risk management functions.
The impulse is understandable, but shadow AI - like any large language model project, sanctioned or not - presents specific cybersecurity and business risks, including the following.
Functional risks stem from an AI tool's failure to function properly.
A shadow AI tool could give bad advice to the business because it is suffering from model drift, was inadequately trained or is hallucinating - i.e., generating false information.
If the AI platform were to suffer a cyberattack, any corporate data employees fed into it could also fall into cybercriminals' hands.
Legal risks follow from functional and operational risks when shadow AI exposes the company to lawsuits or fines.
Lawsuits might also materialize if the shadow tool provides customers with bad advice generated by model drift or poisoned training data or if the model uses copyright-protected data for self-training.
Finally, shadow AI usage opens the door to wasteful or duplicative spending among shadow projects or between shadow and sanctioned ones.
In some cases, shadow AI users may also waste money by failing to take advantage of negotiated rates for similar, sanctioned technology.
Consider, too, the opportunity cost stemming from shadow projects that ultimately fail because they do not follow company policies or good practices - that time and money could have been put toward other projects.
For shadow projects that do get brought into the portfolio and cease to be shadow, expect transition costs.
Employees who used the shadow tool will likely need retraining to understand the tool set in its new context and with new parameters.
IT and security teams have few methods at their disposal to preemptively find and rein in shadow AI, even when they have the authority to do so. Rooting it out therefore takes support from the top: the CEO has to lend the highest level of support to the process, and the CFO needs to sniff out spending on AI applications, platforms and tools that is not visible to IT.
The goal isn't to enlist IT and security teams in crackdowns on the unsanctioned use of AI, or even necessarily to force shadow AI users onto preferred technical platforms.
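On the technical side, even basic egress telemetry can surface candidates for review. The following is a minimal sketch of scanning web proxy logs for traffic to known generative AI services; the log format, column names and domain watchlist here are illustrative assumptions, not a vetted inventory.

```python
# Minimal sketch: flag possible shadow AI usage by scanning egress proxy
# logs for traffic to known generative AI endpoints.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI service domains (assumption).
AI_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count hits to watchlisted AI domains, grouped by user and host."""
    hits = Counter()
    with open(log_path, newline="") as f:
        # Assumes a CSV proxy log with 'user' and 'dest_host' columns.
        for row in csv.DictReader(f):
            if row["dest_host"] in AI_DOMAINS:
                hits[(row["user"], row["dest_host"])] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy.log").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

A report like this identifies candidates for conversation, not targets for punishment - which matters, because the next step is steering usage, not stopping it.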
Routine, nonsensitive work might be acceptable on public AI tools; sensitive data, on the other hand, might be restricted to on-premises AI deployments or secure, enterprise-grade apps configured to abide by internal data security policies.
An AI acceptable use policy can clearly communicate both that improper AI usage can hurt the organization and how to align AI usage with data security policies and other risk mitigation strategies.
If and when shadow AI surfaces, decision-makers can compare the tools' use against the policy to quickly identify risk exposure and necessary next steps.
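To make that comparison quick and repeatable, the policy's core rules can be encoded as data. Below is a minimal sketch, assuming hypothetical tool categories and data classifications; a real version would use the organization's own classification scheme and approved-tool list.

```python
# Minimal sketch: encode an AI acceptable use policy as data so a newly
# surfaced tool can be checked against it quickly. All tool categories,
# data classes and rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ToolUse:
    tool: str        # e.g., "public-chatbot"
    data_class: str  # e.g., "public", "internal", "restricted"

# Hypothetical policy: which data classes each tool category may handle.
POLICY = {
    "on-prem-llm": {"public", "internal", "restricted"},
    "enterprise-ai-app": {"public", "internal", "restricted"},
    "public-chatbot": {"public"},
}

def assess(use: ToolUse) -> str:
    """Return a verdict for a discovered shadow AI use case."""
    allowed = POLICY.get(use.tool)
    if allowed is None:
        return "unknown tool: classify and review before further use"
    if use.data_class not in allowed:
        return f"violation: {use.tool} may not handle {use.data_class} data"
    return "compliant: fold into standard governance"

# Example: an employee pasted restricted data into a public chatbot.
print(assess(ToolUse("public-chatbot", "restricted")))
```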
Security and risk leaders should not expect shadow AI to go away any time soon - especially given the still-expanding set of options available for SaaS tools and for on-premises development.
As new-generation LLMs become more numerous and more diverse - in cost as well as in resource requirements - there is every reason to expect shadow AI projects to multiply as well.