In their 2024 cybersecurity outlook, WatchGuard researchers forecast headline-stealing hacks involving LLMs, AI-based voice chatbots, modern VR/MR headsets, and more in the coming year.
Companies and individuals are experimenting with LLMs to increase operational efficiency.
Threat actors are learning how to exploit LLMs for their own malicious purposes as well.
During 2024, the WatchGuard Threat Lab predicts that a smart prompt engineer whether a criminal attacker or researcher will crack the code and manipulate an LLM into leaking private data.
With approximately 3.4 million open cybersecurity jobs, and fierce competition for the talent that is available, more small- to midsized- companies will turn to trusted managed service and security service providers, known as MSPs and MSSPs, to protect them in 2024.
To accommodate growing demand and scarce staffing resources, MSPs and MSSPs will double down on unified security platforms with heavy automation using AI and ML. Cybercriminals can already buy tools on the underground that send spam email, automatically craft convincing texts, and scrape the internet and social media for a particular target's information and connections, but a lot of these tools are still manual and require attackers to target one user or group at a time.
Well-formatted procedural tasks like these are perfect for automation via artificial intelligence and machine learning - making it likely that AI-powered tools will emerge as best sellers on the dark web in 2024.
While Voice over Internet Protocol and automation technology make it easy to mass dial thousands of numbers, once a potential victim has been baited onto a call, it still takes a human scammer to reel them in.
WatchGuard predicts that the combination of convincing deepfake audio and LLMs capable of carrying on conversations with unsuspecting victims will greatly increase the scale and volume of vishing calls.
What's more, they may not even require a human threat actor's participation.
Virtual and mixed reality headsets are finally beginning to gain mass appeal.
Wherever new and useful technologies emerge, criminal and malicious hackers follow.
In 2024, Threat Lab researchers forecast that either a researcher or malicious hacker will find a technique to gather some of the sensor data from VR/MR headsets to recreate the environment users are playing in.
While quick response codes - which provide a convenient way to follow a link with a device such as a mobile phone - have been around for decades, mainstream usage has exploded in recent years.
Threat Lab analysts expect to see a major, headline-stealing hack in 2024 caused by an employee following a QR code to a malicious destination.
This Cyber News was published on www.helpnetsecurity.com. Publication date: Mon, 04 Dec 2023 04:43:04 +0000