Breakthroughs in large language models are driving an arms race between cybersecurity defenders and social engineering scammers.
For businesses, generative AI is both a curse and an opportunity.
It's not just AI models themselves that cyber criminals are targeting.
In a time when fakery is the new normal, they're also using AI to create alarmingly convincing social engineering attacks or generate misinformation at scale.
While the potential of generative AI to assist creative and analytical processes is beyond doubt, the risks are less well understood.
Equipped with these technologies, cyber criminals can create highly convincing personas and extend their reach through social media, email and even live audio or video calls.
Admittedly, it's still early days for generative AI in social engineering, but there's little doubt that it will come to shape the entire cyber crime landscape in the years ahead. With that in mind, here are some of our top generative AI-driven cyber crime predictions for 2024.
Given the rise of more sophisticated models, which can better mimic emotional intelligence and create personalized content, it's highly probable that AI-created phishing content will become every bit as convincing as human-written messages, if not more so.
And that's before considering speed: crafting a convincing phishing email by hand can take hours, whereas generative AI can produce one in minutes. Routine phishing emails will no longer be easily identifiable by spelling and grammar mistakes or other obvious cues.
Custom open-source model training will advance cyber crime.
Most of the popular generative AI models are closed-source and have robust safety barriers built in.
To get around those guardrails, cyber crime syndicates are already developing their own custom models and selling them via the dark web.
In one recent incident, a scammer used generative AI to create a deepfake video avatar that convincingly impersonated a company's chief financial officer during a live conference call.
What seemed outlandish just a few years ago is now on its way to becoming the number-one attack vector for sophisticated and highly targeted social engineering attacks.
Face-swapping technology is now readily available, and like every other form of generative AI, it's advancing at a pace that's nearly impossible for lawmakers and infosec professionals to keep up with.
A more immediate concern is generative AI's ability to mimic voices and writing styles.
Like almost any disruptive innovation, generative AI can be a force for good or bad. The only viable way for infosec professionals to keep up is to incorporate AI into their threat detection and mitigation processes.
Generative AI specifically can assist infosec teams in operations like malware analysis, phishing detection and prevention, and threat simulation and training.
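To make the phishing-detection point concrete, here is a minimal, illustrative sketch of rule-based phishing triage. The function and indicator list are hypothetical examples, not any vendor's detector; real pipelines combine many more signals like these with ML classifiers rather than relying on a handful of rules.

```python
# Illustrative rule-based phishing triage: scores an email on a few
# classic social engineering indicators. Hypothetical helper for
# demonstration only.

URGENCY_TERMS = {"urgent", "immediately", "verify your account", "suspended"}

def phishing_score(subject: str, body: str, sender_domain: str,
                   link_domains: list[str]) -> int:
    """Return a crude risk score: higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = 0
    # 1. Urgency / pressure language is a common social engineering cue.
    score += sum(term in text for term in URGENCY_TERMS)
    # 2. Links pointing somewhere other than the sender's own domain.
    score += sum(domain != sender_domain for domain in link_domains)
    return score

# Example usage with a suspicious and a benign message.
risky = phishing_score(
    "Urgent: verify your account",
    "Your access will be suspended. Click below immediately.",
    "bank.example.com",
    ["login.bank-example-security.net"],
)
benign = phishing_score("Team lunch", "Pizza on Friday", "corp.example.com", [])
```

As the article notes, AI-written lures defeat exactly this kind of surface heuristic (the spelling and grammar cues are gone), which is why defenders are turning to AI-assisted detection instead of static rules alone.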
The most effective way to keep ahead of cyber criminals is to think like cyber criminals, hence the value of red-teaming and offensive security.
If you'd like to learn more about cybersecurity in the era of generative AI and how AI can enhance the abilities of your security teams, read IBM's in-depth guide.
This Cyber News was published on securityintelligence.com. Publication date: Thu, 09 May 2024 14:43:06 +0000