Google, Meta, OpenAI, and X are among 20 technology companies that have pledged to weed out fraudulent content generated by artificial intelligence, as part of efforts to safeguard global elections expected to take place this year.
The accord also includes efforts to raise public awareness of how people can protect themselves from being manipulated by such content, according to a joint statement released by the signatories, which also include TikTok, Amazon, IBM, Anthropic, and Microsoft.
Under the accord, the 20 organizations commit to eight mission statements, including seeking to detect and prevent the distribution of deceptive AI election content and being transparent with the public about how they address such content.
They will work together to develop and implement tools to identify and curb the spread of this content, as well as to track its origins.
These efforts can include developing classifiers or provenance methods and standards, such as watermarking or signed metadata, and attaching machine-readable information to AI-generated content.
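To illustrate the "signed metadata" idea in concrete terms, here is a minimal sketch of how a provider might attach a machine-readable provenance record to generated content and sign it so that downstream platforms can verify it was not tampered with. All names, fields, and the HMAC scheme are illustrative assumptions for this sketch, not any specific industry standard (real systems such as C2PA use asymmetric signatures and richer manifests).

```python
import hashlib
import hmac
import json

# Illustrative only: a real deployment would use an asymmetric key pair,
# so platforms can verify signatures without holding the signing secret.
SECRET_KEY = b"provider-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance record that binds metadata to the content hash."""
    record = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the signature, and that the recorded hash matches the content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
meta = attach_provenance(image, "example-image-model")
print(verify_provenance(image, meta))            # True: intact content
print(verify_provenance(b"edited bytes", meta))  # False: content was altered
```

The key property for the accord's purposes is that editing either the content or the attached metadata invalidates the signature, giving platforms an automated tamper check before distribution.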
The eight commitments will apply where relevant to the services each company provides.
The accord aims to set expectations for how the signatories will manage risks arising from deceptive AI election content created via their public platforms or open foundational models, or distributed on their social and publishing platforms.
These expectations align with each signatory's own policies and practices.
Models or demos intended for research purposes or primarily for enterprise use are not covered under the accord.
The signatories added that AI can be leveraged to help defenders counter bad actors and enable swifter detection of deceptive campaigns.
AI tools can also significantly lower the overall cost of defense, allowing smaller organizations to implement robust protections.
The risks AI-powered misinformation poses to societal cohesion will dominate the risk landscape this year, according to the Global Risks Report 2024, released last month by the World Economic Forum.
The report lists misinformation and disinformation as the leading global risk over the next two years, warning that its widespread use as well as the tools to disseminate it could undermine the legitimacy of new incoming governments.
Published on www.zdnet.com, Mon, 19 Feb 2024 17:13:04 +0000.