In an accord announced on Friday, 20 leading technology companies, including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon, and Adobe, pledged to take proactive steps to prevent deceptive uses of artificial intelligence from interfering with elections around the world.
The companies also committed to educating voters about the use of artificial intelligence and to providing transparency around elections worldwide.
The head of the Munich Security Conference, which announced the accord, lauded the agreement as a critical step toward improving election integrity, increasing societal resilience, and promoting trustworthy technology practices.
More than 4 billion people are expected to be eligible to vote in over 40 countries in 2024.
A growing number of experts warn that easy-to-use generative AI tools could be exploited by bad actors in those campaigns to sway votes and influence election outcomes.
Generative AI tools let users create images, video, and audio from simple text prompts.
Some of these services lack adequate safeguards to prevent users from creating content that depicts politicians or celebrities saying or doing things they never said or did.
Notably, the accord does not call for an outright ban on such content.
While the agreement is intended to show unity among platforms with billions of users, it mostly outlines efforts already underway, such as initiatives to identify and label AI-generated content.
Concern is growing about how artificial intelligence software could mislead voters and maliciously misrepresent candidates, especially in an election year that will see millions of people head to the polls in countries around the world.
AI-generated audio has already been used to impersonate President Biden ahead of New Hampshire's January primary in an apparent attempt to discourage Democrats from voting, and, last September in Slovakia, to falsely depict a leading candidate claiming to have rigged the election.
The 20 signatories include companies involved in creating and distributing AI-generated content, such as OpenAI, Anthropic, and Adobe, among others.
Notably, Eleven Labs, whose voice-replication technology is suspected to have been used to fabricate the fake Biden audio, is among the signatories.
Social media platforms including Meta, TikTok, and X, formerly known as Twitter, have also joined the accord.
Nick Clegg, Meta's President of Global Affairs, emphasized the need for collective action across the industry, citing the pervasive threat posed by AI.

The accord sets out a comprehensive set of principles for combating deceptive election-related content, calling for transparent disclosure of content origins and heightened public awareness.
Specifically addressing AI-generated audio, video, and imagery, the accord targets content falsifying the appearance, voice, or conduct of political figures, as well as disseminating misinformation about electoral processes.
Supporters describe the accord as a pivotal step in protecting digital communities from harmful AI content, a collaborative effort that complements the companies' individual initiatives.
This article was published on www.cysecurity.news on Sun, 18 Feb 2024 18:43:05 +0000.