The upcoming United States presidential election in November 2024 has prompted Microsoft and its partner OpenAI to take decisive action against the spread of misinformation and deepfakes.
Leveraging ChatGPT, the AI chatbot developed by OpenAI with Microsoft's backing, the companies aim to play a pivotal role in safeguarding the electoral process.
The companies have officially committed to combating the misuse of AI, aiming to ensure that state-backed actors and cyber criminals do not have a free hand to disseminate fake news during the 2024 US polls.
In a post dated January 15, 2024, OpenAI, in which Microsoft is a major investor, disclosed a strategic collaboration with the National Association of Secretaries of State (NASS).
The objective is clear: to counter the proliferation of deepfake videos and disinformation in the run-up to the 2024 elections.
OpenAI has configured ChatGPT to handle queries about elections, presidential candidates, and poll procedures by directing users to CanIVote.org, a credible website offering comprehensive information on how to vote in the United States.
The initiative begins with Windows 11 users in the United States, who will be channeled toward the official election website.
To ensure the accuracy of information and prevent abuse by adversaries, traffic interacting with the chatbot will be closely monitored.
This scrutiny extends to DALL-E 3, OpenAI's latest image-generation model, which has been exploited by state-funded actors to create deepfake imagery.
According to the official statement, every image produced by DALL-E 3 will be stamped with a Coalition for Content Provenance and Authenticity (C2PA) digital credential.
This unique identifier acts like a barcode for each generated image, serving the dual purpose of aligning with the Content Authenticity Initiative and Project Origin.
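The core idea behind such a credential is that a provenance claim is cryptographically bound to the exact bytes of the image, so any alteration breaks verification. The sketch below illustrates this principle only; real C2PA credentials are signed manifests embedded in the file and verified against a certificate chain, whereas the key, claim fields, and function names here are hypothetical stand-ins.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA uses certificate-based signatures.
SIGNING_KEY = b"demo-key-not-a-real-c2pa-certificate"

def attach_credential(image_bytes: bytes, generator: str) -> dict:
    """Create a provenance record tied to the image's content hash."""
    claim = {
        "generator": generator,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(image_bytes: bytes, claim: dict) -> bool:
    """Check the claim is intact and matches these exact image bytes."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and unsigned["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...fake image bytes"
cred = attach_credential(image, "dall-e-3")
print(verify_credential(image, cred))         # True: image untouched
print(verify_credential(image + b"!", cred))  # False: bytes were altered
```

Because the credential covers a hash of the content, even a one-byte edit to the image invalidates it, which is what lets platforms flag tampered or re-generated media.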
Noteworthy companies such as Adobe, X, Facebook, Google, and The New York Times are already actively participating in these initiatives, which aim to establish content provenance and combat manipulated media.
Notably, Google DeepMind is also joining the effort by experimenting with SynthID, an AI watermarking tool, following in the footsteps of Meta AI. This collective endeavor signals a comprehensive push across major tech players to uphold the integrity of information and counter the rising threat of deepfake content.
This Cyber News was published on www.cybersecurity-insiders.com. Publication date: Wed, 17 Jan 2024 16:28:03 +0000