OpenAI to use ChatGPT to curtail fake news and deepfakes

The upcoming United States presidential election in November 2024 has prompted OpenAI to take decisive action against the spread of misinformation and deepfakes.
Leveraging its AI chatbot, ChatGPT, the company aims to play a pivotal role in safeguarding the electoral process.
OpenAI officially announced its commitment to combating the potential misuse of its AI tools, ensuring that cyber criminals working on behalf of adversarial states do not have an open field to disseminate fake news during the 2024 US polls.
In a blog post dated January 15, 2024, OpenAI disclosed a strategic collaboration with the National Association of Secretaries of State (NASS).
The objective is clear: to counteract the proliferation of deepfake videos and disinformation in the run-up to the 2024 elections.
OpenAI has tuned ChatGPT so that queries about election procedures, presidential candidates, and how to vote direct users to CanIVote.org, a credible website run by NASS that offers comprehensive information on voter registration and polling procedures.
The rollout begins with ChatGPT users in the United States, who will be channeled toward the official election-information site when they raise procedural voting questions.
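OpenAI has not published how this routing works internally. A minimal sketch of the general idea, assuming a simple keyword pre-filter sitting in front of the model, might look like the following; the ELECTION_TERMS list and the route_election_query helper are illustrative only and are not part of any OpenAI API.

```python
# Hypothetical sketch: route procedural voting questions to an
# authoritative source (CanIVote.org) before handing the prompt to the
# model. The keyword heuristic below is an assumption, not OpenAI's
# actual implementation.

ELECTION_TERMS = {
    "register to vote", "polling place", "absentee ballot",
    "voter id", "early voting", "where do i vote",
}

CANIVOTE_NOTICE = (
    "For authoritative information on voter registration and polling "
    "procedures in the United States, see https://www.canivote.org."
)

def route_election_query(prompt: str) -> str | None:
    """Return a referral notice if the prompt looks like a procedural
    voting question; otherwise return None and let the model answer."""
    text = prompt.lower()
    if any(term in text for term in ELECTION_TERMS):
        return CANIVOTE_NOTICE
    return None

if __name__ == "__main__":
    print(route_election_query("Where do I vote in November?"))
    print(route_election_query("Explain C2PA credentials."))
```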
To keep the information accurate and to prevent abuse by adversaries, traffic interacting with the chatbot will be closely monitored for misuse.
This scrutiny extends to DALL-E 3, OpenAI's latest image-generation model, which threat actors, including state-funded groups, have sought to exploit for deepfake imagery.
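OpenAI has not detailed its monitoring pipeline. A hedged illustration of one plausible building block is screening incoming prompts with the publicly documented moderation endpoint and logging anything flagged for human review; note that election misinformation is not a dedicated moderation category, and the logging scheme below is an assumption.

```python
# Hedged sketch: screen chatbot traffic for abusive prompts using
# OpenAI's documented moderation endpoint, then log flagged requests for
# review. This is a generic abuse screen, not OpenAI's actual
# election-integrity tooling.

import json
import logging
from openai import OpenAI  # pip install "openai>=1.0"

logging.basicConfig(level=logging.INFO)
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked and logged."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # Record which policy categories fired so reviewers can triage.
        logging.info("flagged prompt categories: %s",
                     json.dumps(result.categories.model_dump()))
        return True
    return False


if __name__ == "__main__":
    print(screen_prompt("Write a threatening message to election workers."))
```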
According to the official statement, every image produced by DALL-E 3 will be stamped with a Coalition for Content Provenance and Authenticity (C2PA) digital credential.
This credential acts like a barcode for each generated image, identifying its origin and aligning with the Content Authenticity Initiative and Project Origin.
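For readers who want to inspect such a credential themselves, a sketch along the following lines may work, assuming the open-source c2patool utility published by the Content Authenticity Initiative is installed; its output format and exit codes can vary by version, so treat this as an illustration rather than a verified recipe.

```python
# Hedged sketch: check an image for an embedded C2PA content credential
# by shelling out to the open-source `c2patool` CLI. Assumes c2patool is
# installed and on PATH; behavior may differ across versions.

import json
import shutil
import subprocess
import sys


def read_content_credential(image_path: str) -> dict | None:
    """Return the C2PA manifest report as a dict, or None if absent."""
    if shutil.which("c2patool") is None:
        raise RuntimeError("c2patool is not installed")
    proc = subprocess.run(
        ["c2patool", image_path],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # c2patool typically exits non-zero when no credential is embedded.
        return None
    return json.loads(proc.stdout)


if __name__ == "__main__":
    manifest = read_content_credential(sys.argv[1])
    print("credential found" if manifest else "no credential embedded")
```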
Noteworthy companies such as Adobe, X, Facebook, Google, and The New York Times are already actively participating in these initiatives, which aim to establish the provenance and authenticity of digital content.
Notably, Google DeepMind is also joining the effort, experimenting with an AI watermarking tool called SynthID, following in the footsteps of Meta AI. This collective endeavor signals a comprehensive push across major tech players to uphold the integrity of information and counter the rising threat of deepfake content.


This Cyber News was published on www.cybersecurity-insiders.com. Publication date: Wed, 17 Jan 2024 16:28:03 +0000

