Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

In a digital landscape fraught with evolving threats, the intersection of artificial intelligence and cybercrime has become a pressing concern.
Recent revelations from Microsoft and OpenAI underscore an alarming trend: malicious actors are harnessing large language models (LLMs) to bolster their cyber operations.
The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare.
According to Microsoft's latest research, groups such as Strontium (also known as APT28 or Fancy Bear), notorious for high-profile breaches including the hacking of Hillary Clinton's 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies.
Their use ranges from researching satellite communication protocols to automating technical operations through scripting tasks such as file manipulation and data selection.
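To make the "file manipulation and data selection" point concrete, here is a minimal, benign sketch of the kind of routine helper an LLM can generate on request. The function name and behaviour are illustrative assumptions, not a reconstruction of any group's actual tooling; the point is simply that such mundane automation is now trivial to produce at scale.

```python
# Illustrative only: a routine file-selection helper of the sort an LLM
# can write in seconds. Selects files under a directory tree whose
# extension matches a given set -- benign on its own, a time-saver at scale.
import tempfile
from pathlib import Path

def select_files(root, extensions):
    """Return sorted paths under `root` whose suffix is in `extensions`."""
    exts = {e.lower() for e in extensions}
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and p.suffix.lower() in exts)

if __name__ == "__main__":
    # Demonstrate on a throwaway directory with three sample files.
    with tempfile.TemporaryDirectory() as d:
        for name in ("report.docx", "notes.txt", "data.csv"):
            Path(d, name).write_text("x")
        hits = select_files(d, [".docx", ".csv"])
        print([p.name for p in hits])  # ['data.csv', 'report.docx']
```

Nothing here is novel; what the research highlights is that LLMs let operators of any skill level churn out such glue code on demand.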
This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas.
North Korea's Thallium group and Iran's Curium group have followed suit, using LLMs to research vulnerabilities, craft phishing campaigns, and evade detection mechanisms.
Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally.
While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape.
Swift action to shut down associated accounts and assets, coupled with collaborative efforts to share intelligence with the defender community, are crucial steps in mitigating the risks posed by AI-enabled cyberattacks.
The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities.
Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example: even short voice samples can be used to create convincing impersonations.
This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities.
In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft spearheads efforts to harness AI for defensive purposes.
The development of Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to help defenders identify breaches and navigate the complexities of cybersecurity data.
Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats.
The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve.
The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats.
By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.


This Cyber News was published on www.cysecurity.news. Publication date: Sat, 17 Feb 2024 18:43:05 +0000

