Recent events, including an AI-generated deepfake robocall impersonating President Biden that urged New Hampshire voters to abstain from the primary, serve as a stark reminder that malicious actors increasingly view modern generative AI platforms as a potent weapon for targeting US elections.
Platforms like ChatGPT, Google's Gemini, or any number of purpose-built Dark Web large language models could play a role in disrupting the democratic process, with attacks encompassing mass influence campaigns, automated trolling, and the proliferation of deepfake content.
FBI Director Christopher Wray recently voiced concerns about ongoing information warfare using deepfakes that could sow disinformation during the upcoming presidential campaign, as state-backed actors attempt to sway geopolitical balances.
That's a familiar tactic from the Cambridge Analytica scandal, in which the company amassed psychological profile data on 230 million US voters in order to serve up highly tailored messaging via Facebook in an attempt to influence individuals' beliefs - and votes.
The mix of social media and readily available deepfake tech could be a doomsday weapon for polarizing US citizens in an already deeply divided country, he adds.
The platforms that threat actors use to sow division will likely be of little help: the social media platform X, formerly known as Twitter, has gutted its quality assurance on content, he notes.
AI Amplifies Existing Phishing TTPs
GenAI is already being used to craft more believable, targeted phishing campaigns at scale - but in the context of election security, that phenomenon is even more concerning, according to Scott Small, director of cyber threat intelligence at Tidal Cyber.
Small says AI adoption also lowers the barrier to entry for launching such attacks, a factor likely to increase the volume of attacks this year that attempt to infiltrate campaigns or take over candidate accounts for impersonation, among other possibilities.
Defending Against AI Election Threats
To defend against these threats, election officials and campaigns must be aware of GenAI-powered risks and how to counter them.
They must also ensure that volunteers and workers are trained on AI-powered threats such as enhanced social engineering, the threat actors behind them, and how to respond to suspicious activity.
To that end, staff should participate in social engineering and deepfake video training that covers all forms and attack vectors, including electronic, in-person, and telephone-based attempts.
Campaign and election volunteers must also be trained on how to safely provide information online and to outside entities - including in social media posts - and to exercise caution when doing so.
O'Reilly says that, long term, regulation that includes watermarking for audio and video deepfakes will be instrumental, noting that the federal government is working with the owners of LLMs to put protections in place.
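To make the watermarking idea concrete, below is a minimal, hypothetical Python sketch of one classic technique - spread-spectrum audio marking, where a key-derived pseudo-random signal is mixed into the audio at inaudible levels and later detected by correlation. The function names, parameters, and approach are illustrative assumptions for this article only, not any regulator's mandate or vendor's actual scheme; production provenance systems are far more robust against compression and editing.

    # Illustrative spread-spectrum audio watermark (toy example, not a real scheme)
    import numpy as np

    def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.002) -> np.ndarray:
        """Add a key-derived pseudo-random chip sequence, well below audible levels."""
        rng = np.random.default_rng(key)
        mark = rng.choice([-1.0, 1.0], size=audio.shape)
        return audio + strength * mark

    def detect_watermark(audio: np.ndarray, key: int) -> float:
        """Correlate against the key's chip sequence; a high score suggests the mark is present."""
        rng = np.random.default_rng(key)
        mark = rng.choice([-1.0, 1.0], size=audio.shape)
        return float(np.dot(audio, mark) / len(audio))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clean = rng.normal(0, 0.1, 48_000)      # one second of stand-in "audio"
        marked = embed_watermark(clean, key=1234)
        print(f"unmarked score: {detect_watermark(clean, key=1234):+.5f}")   # near zero
        print(f"marked score:   {detect_watermark(marked, key=1234):+.5f}")  # near 0.002
        print(f"wrong key:      {detect_watermark(marked, key=9999):+.5f}")  # near zero

Only a party holding the key can reliably detect the mark, which is why such proposals pair watermarking with disclosure rules for the platforms that generate the content.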