Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters.
Twelve other companies - including Elon Musk's X - are also signing on to the accord.
The companies aren't committing to ban or remove deepfakes.
Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms.
Several political leaders from Europe and the U.S. also joined Friday's announcement.
The agreement at the German city's annual security meeting comes as more than 50 countries are due to hold national elections in 2024.
Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden's voice tried to discourage people from voting in New Hampshire's primary election last month.
Just days before Slovakia's elections last September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the vote.
Fact-checkers scrambled to debunk the recordings as they spread across social media.
Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.
The accord says the companies will focus on transparency with users about their policies and will work to educate the public about how to avoid falling for AI fakes.
Most of the companies have previously said they are putting safeguards on their own generative AI tools that can manipulate images and sound, and are working to identify and label AI-generated content so social media users know whether what they're seeing is real.
Most of those proposed solutions haven't rolled out yet, and the companies have faced pressure to do more.
That pressure is heightened in the U.S., where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed that AI-generated voices in robocalls are illegal, but that ruling doesn't cover audio deepfakes circulating on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not.
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney.
The San Francisco-based startup didn't immediately respond to a request for comment Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement.
Published on www.securityweek.com on Sun, 18 Feb 2024 13:43:05 +0000.