For years, phishing was just a numbers game: a malicious actor would slap together a generic email and blast it out to thousands of recipients in the hope that a few would take the bait. As email defenses improved, attackers were forced to adapt. Common among their new techniques was a shift toward a more balanced approach to phishing, one emphasizing quality as well as quantity.
This shift gave rise to the advanced phishing techniques we know all too well today, like spear-phishing and business email compromise.
Unlike the phishing tactics of yesteryear, these techniques make use of much more carefully crafted, convincing messaging tailored to deceive specific individuals, groups, or organizations.
This shift in phishing philosophy has also led to a precipitous decline in the use of malicious payloads in phishing emails, presumably to avoid detection by today's more capable email security solutions.
That inherent trade-off between quality and scale now appears to be a thing of the past, with the emergence of generative AI effectively flipping the funnel on phishing speed and scale.
Interestingly, researchers have been aware of GenAI's potential to supercharge phishing campaigns since 2021, with some even publishing research demonstrating the ability of OpenAI's GPT-3 to generate significantly more sophisticated and effective phishing emails in a fraction of the time.
Now, over a year since GenAI tools entered the mainstream, they've managed to completely upend the traditional trade-off between quality and quantity that once held phishing content creation in check.
The security community has witnessed the emergence of GenAI tools explicitly designed for nefarious purposes, such as FraudGPT and WormGPT. These tools empower threat actors by automating the development of highly personalized spear-phishing and BEC attacks that are not only grammatically correct, but also capable of adapting the text to various languages, contexts, and communication styles.
Such customization could enable bad actors to automate even more aspects of the phishing process, all while operating within a tool's prescribed safeguards.
A significant majority of organizations appear ill-prepared to counter these emerging phishing threats.
Our analysis found over 8 million phishing attempts successfully evaded native defenses in 2022 alone.
It's becoming increasingly apparent that the only reliable way to combat this rising tide of advanced phishing threats is to fight fire with fire: leveraging AI- and machine learning-enabled email security solutions to defend against a rapidly changing, increasingly challenging threat landscape.
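To make the idea concrete, here is a minimal sketch of the core technique behind ML-based phishing detection: training a text classifier on labeled examples and scoring new messages. This is a toy Naive Bayes model in pure Python; the training examples and word-level features are illustrative assumptions, not any vendor's implementation (production systems also use sender reputation, URLs, headers, and far larger datasets).

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase whitespace split; real systems use much richer features.
    return text.lower().split()

class NaiveBayes:
    """Toy multinomial Naive Bayes over email body tokens."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # Log prior plus log likelihood with add-one smoothing.
        total_docs = sum(self.label_counts.values())
        logp = math.log(self.label_counts[label] / total_docs)
        counts = self.word_counts[label]
        total = sum(counts.values())
        vocab = len(set(self.word_counts["phish"]) | set(self.word_counts["ham"]))
        for word in tokenize(text):
            logp += math.log((counts[word] + 1) / (total + vocab))
        return logp

    def predict(self, text):
        return max(("phish", "ham"), key=lambda label: self.score(text, label))

# Tiny, hypothetical training set for illustration only.
clf = NaiveBayes()
clf.train("urgent verify your account password immediately", "phish")
clf.train("wire transfer required click this link now", "phish")
clf.train("meeting notes attached for tomorrow's standup", "ham")
clf.train("lunch plans for friday team offsite", "ham")

print(clf.predict("please verify your password via this link"))  # → phish
```

The design point is that the model learns statistical regularities rather than matching fixed signatures, which is what lets AI-based defenses generalize to phishing text they have never seen before.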
Employees play a critical role in scrutinizing flagged emails, engaging with email chatbots for context, and contributing their insights to catch highly sophisticated emails that might circumvent security.
In addition to deploying the right AI security tools, every CISO should prioritize security awareness training and phishing simulation testing.
As phishing tactics evolve, employees may become their company's last line of defense against novel attacks.
To build broader employee knowledge of trending phishing tactics, it's crucial to develop and implement ongoing training and testing programs.
As a first step, companies should use phishing simulation testing to establish a performance baseline for each employee.
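Establishing that baseline can be as simple as tracking each employee's click rate across simulated campaigns. The sketch below shows one hypothetical way to compute it; the `(employee, clicked)` record schema is an assumption for illustration, not a reference to any particular simulation platform.

```python
from collections import defaultdict

def baseline_click_rates(results):
    """Compute each employee's phishing-simulation click rate.

    `results` is a list of (employee, clicked) tuples, one per
    simulated email sent; the schema is a hypothetical example.
    """
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for employee, did_click in results:
        sent[employee] += 1
        if did_click:
            clicked[employee] += 1
    return {emp: clicked[emp] / sent[emp] for emp in sent}

# Hypothetical results from one simulated campaign.
results = [
    ("alice", False), ("alice", True), ("alice", False), ("alice", False),
    ("bob", True), ("bob", True), ("bob", False), ("bob", False),
]
rates = baseline_click_rates(results)
print(rates)  # alice clicked 1 of 4 (0.25), bob 2 of 4 (0.5)
```

Per-employee baselines like these let training programs target the people and tactics where simulations show the greatest susceptibility, and make improvement measurable over time.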
The phishing landscape has evolved significantly in recent years, with threat actors targeting specific individuals, groups, and organizations at a scale and level of sophistication that many legacy email solutions cannot defend against.
By staying informed and prepared, organizations can significantly reduce their vulnerability to these advanced phishing techniques and protect their valuable assets from cybercriminals.
Published on www.helpnetsecurity.com, Mon, 15 Jan 2024 06:13:04 +0000.