If you believe that the 2020 Presidential election in the United States represented the worst kind of campaign replete with lies, misstated facts and disinformation, I have some news for you.
The rapid evolution of artificial intelligence and analytics engines will put campaign-year disinformation into hyperspeed in terms of false content creation, dissemination and impact.
To prepare ourselves as a society to sift through falsehoods, deal with them appropriately and arrive at the truth, we need to understand how disinformation works in the age of AI. This article describes the four steps of an AI-driven disinformation campaign and how to get ahead of them, so that security teams are better prepared to counter, and uncover the truth behind, the advancing tactics of malicious actors.
AI, particularly generative AI, helps threat actors to make more content, faster.
They can make content that appears far more realistic.
The magic of AI here is that it lets them rapidly create highly credible content that is multimedia, multilingual, coordinated, continuously refreshed and monitored.
Another power of AI in content creation is that it can eliminate content reuse and other markers of fake content generated by humans.
Targeted groups and individuals have traditionally been able to filter out fake content, provided they were on guard for it. With AI behind the scenes, the reuse and other telltale markers are gone, and threat actors can weaponize the new, realistic-looking content far more effectively.
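The "reuse markers" worth screening for can be made concrete. A classic defender-side check for recycled template content is near-duplicate detection: break each post into overlapping word n-grams (shingles) and compare their Jaccard similarity. The sketch below is illustrative only; the function names, shingle size and example posts are assumptions, not anything from a real detection pipeline:

```python
def shingles(text: str, n: int = 3) -> set[str]:
    """Lowercase word n-grams; a crude fingerprint of phrasing."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two posts recycling the same template score high;
# an AI-paraphrased variant shares almost no shingles.
post_a = "The election was stolen and everyone in my town knows it"
post_b = "The election was stolen and everyone in my county knows it"
post_c = "Folks around here are convinced the vote count was rigged"

print(jaccard(shingles(post_a), shingles(post_b)))  # high: template reuse
print(jaccard(shingles(post_a), shingles(post_c)))  # low: rewritten content
```

This is exactly the kind of signal AI-generated paraphrase defeats: each generated variant shares few shingles with any other, so similarity-based filters stop firing.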
AI not only enables them to inject data into numerous forms of new content; it does so at scales and speeds never before contemplated, beyond anything humans could achieve.
Threat actors can instruct AI to sound like different types of Americans, across all segments of the population.
AI can easily create content that reflects the attitudes, opinions and vocabulary of a midwestern farmer if instructed to do so, then draw on the same data to produce equally realistic content in the voice of someone from Texas.
Amplification is, essentially, the act of getting as many people as possible to see your content on the different social media platforms.
Leveraging analytics and AI, threat actors have the infrastructure they need to create an army of fake personas that look real at the outset and become progressively 'more real' over time, as AI fills in their profiles with regular, credible, non-controversial content and back stories.
By posting, boosting and interacting with the previously generated content, these personas can persuade anyone who takes the bait.
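Persona networks of this kind do leave one behavioral trace: coordination. Accounts that repeatedly publish identical content within minutes of each other are far more likely to be scripted than organic. A minimal defender-side sketch, assuming a simple feed of (account, timestamp, text) tuples; the threshold values and account names are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(posts, window_s=300, min_shared=2):
    """Flag account pairs that post identical text within `window_s`
    seconds of each other at least `min_shared` times, a common
    fingerprint of scripted persona networks."""
    by_text = defaultdict(list)          # text -> [(timestamp, account)]
    for account, timestamp, text in posts:
        by_text[text].append((timestamp, account))

    pair_hits = defaultdict(int)         # (acct_a, acct_b) -> co-post count
    for events in by_text.values():
        events.sort()
        for (t1, a1), (t2, a2) in combinations(events, 2):
            if a1 != a2 and abs(t1 - t2) <= window_s:
                pair_hits[tuple(sorted((a1, a2)))] += 1

    return {pair for pair, n in pair_hits.items() if n >= min_shared}

feed = [
    ("bot_1", 0,    "Share this before they delete it!"),
    ("bot_2", 60,   "Share this before they delete it!"),
    ("bot_1", 900,  "The media won't report this."),
    ("bot_2", 950,  "The media won't report this."),
    ("human", 5000, "Nice weather today."),
]
print(coordinated_pairs(feed))  # {('bot_1', 'bot_2')}
```

Note that AI-driven paraphrasing weakens the exact-text match here too, which is why timing and interaction-graph signals matter more as content itself becomes unrepeatable.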
Actualization in the context of AI-driven social campaigns involves gathering feedback from prior and ongoing efforts to further optimize content creation and amplification.
Again, applying analytics and AI to the data gathered from their efforts, threat actors refine their content to make it more credible and more targeted.
Creative minds that become expert at prompting AI engines with ever more targeted questions about social campaigns, and about the data those campaigns produce, can devise endless scenarios. AI can then advise them on how best to approach each scenario, strategically and tactically.
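The feedback loop described here is, mechanically, ordinary A/B optimization: publish variants, measure engagement, and reinvest in whatever performs best. A minimal epsilon-greedy sketch of that loop (the variant names, stats layout and epsilon value are assumptions for illustration; this is the generic bandit pattern, not any specific actor's tooling):

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy selection: usually reuse the variant with the best
    observed engagement rate, occasionally explore another one.
    The same mechanism drives legitimate A/B testing."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore
    return max(stats,                      # exploit the current best
               key=lambda v: stats[v]["clicks"] / max(stats[v]["views"], 1))

# Hypothetical engagement data gathered from a prior posting round.
stats = {
    "variant_a": {"views": 1000, "clicks": 12},
    "variant_b": {"views": 1000, "clicks": 87},
}
print(pick_variant(stats, epsilon=0.0))  # variant_b
```

Understanding that the optimization target is measurable engagement, not truth, is what lets defenders reason about which content a campaign will converge on.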
My hope is that this mini-series on AI and disinformation will provide a collective picture of what security teams across industries need to prepare for, and strategically prioritize, this year as major events like the 2024 election cycle come into full swing.
This Cyber News was published on www.securityweek.com. Publication date: Tue, 19 Mar 2024 12:43:08 +0000