As innovation in artificial intelligence continues apace, 2024 will be a crucial time for organizations and governing bodies to establish security standards, protocols, and other guardrails to prevent AI from getting ahead of them, security experts warn.
Large language models, powered by sophisticated algorithms and massive data sets, demonstrate remarkable language understanding and humanlike conversational capabilities.
These models represent enormous potential for significant productivity and efficiency gains for organizations, but experts agree that the time has come for the industry as a whole to address the inherent security risks posed by their development and deployment.
Despite popular dystopian fears, most security experts aren't particularly concerned about a doomsday scenario in which machines become smarter than humans and take over the world.
What is concerning is the fact that AI advancements and adoption are moving too quickly for the risks to be properly managed, researchers note.
Instead, they argue, risk assessment and the implementation of appropriate safeguards should keep pace with the rate at which LLMs are being trained and developed.
Generative AI Risks
There are several widely recognized risks of generative AI that demand consideration and that will only worsen as future generations of the technology become more capable.
None of them so far poses a science-fiction doomsday scenario in which AI conspires to destroy its creators.
Because LLMs require access to vast amounts of data to provide accurate and contextually relevant outputs, sensitive information can be inadvertently revealed or misused.
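One common mitigation is to scrub obvious identifiers from prompts before they leave the organization. The following is a minimal Python sketch of such a redaction step; the patterns and placeholder labels are illustrative assumptions, not a vetted data-loss-prevention policy or any particular vendor's product.

```python
import re

# Hypothetical guardrail: scrub obviously sensitive strings from a prompt
# before it is sent to an external LLM API. These patterns are illustrative,
# not an exhaustive or production-grade DLP policy.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
    print(redact(raw))
    # -> "Summarize this ticket from [REDACTED EMAIL], SSN [REDACTED SSN]."
```

A simple filter like this is no substitute for formal data-governance controls, but it illustrates the kind of guardrail that can sit between internal data and an external model.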
From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems.
One approach has been to use the models to craft sophisticated business email compromise (BEC) and other phishing attacks, which depend on convincingly personalized, socially engineered messages.
AI hallucinations also pose a significant security threat, giving malicious actors a unique way to weaponize LLM-based tools such as ChatGPT.
An AI hallucination is a plausible-sounding response from the AI that is incomplete, biased, or flat-out false.
For example, when a chatbot confidently recommends a software package or code library that doesn't actually exist, an attacker can publish a malicious package under that hallucinated name, and developers who follow the AI's suggestion end up pulling compromised code into their builds. In this way, attackers can further weaponize AI to mount supply chain attacks.
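As a rough illustration of one defensive check against that scenario, the Python sketch below queries the public PyPI JSON API to see whether a package name suggested by an AI assistant actually exists before anything is installed. The helper name and the workflow around it are assumptions made for illustration, not an established tool.

```python
import urllib.request
import urllib.error

# Hypothetical pre-install check: verify that a package name suggested by an
# AI assistant actually resolves on PyPI before it is added to a project.
# A missing name may be a hallucination -- or a slot an attacker could claim.
def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    for suggested in ["requests", "totally-made-up-llm-package"]:
        status = "exists" if package_exists_on_pypi(suggested) else "NOT FOUND - verify before installing"
        print(f"{suggested}: {status}")
```

Even when a name does resolve, that alone doesn't prove the package is trustworthy; a team would still want to review its maintainers, download history, and contents before adopting it.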
The Way Forward
Managing these risks will require measured and collective action before AI innovation outruns the industry's ability to control it, experts note.
Organizations also should take a measured approach to adopting AI - including AI-based security solutions - lest they introduce more risks into their environment, Netrix's Wilson cautions.
Securiti's Rinehart suggests a two-tiered approach to phasing AI into an environment: deploy narrowly focused solutions first, then put guardrails in place immediately so the organization is not exposed to unnecessary risk.
To mitigate risk, experts also recommend establishing security policies and procedures around AI before it is deployed, rather than as an afterthought.
Organizations can even appoint a dedicated AI risk officer or task force to oversee compliance.
Outside of the enterprise, the industry as a whole must also establish security standards and practices around AI that everyone developing and using the technology can adopt - something that will require collective action by both the public and private sectors on a global scale, DarkTrace Federal's Fowler says.