OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT. The AI research organization took action against the specific accounts after receiving key information from Microsoft's Threat Intelligence team, which found the hacking groups misusing its large language model (LLM) services for malicious purposes.
Forest Blizzard [Russia]: Utilized ChatGPT to conduct research into satellite and radar technologies pertinent to military operations and to optimize its cyber operations with scripting enhancements.
Emerald Sleet [North Korea]: Leveraged ChatGPT to research North Korea and generate spear-phishing content, as well as to understand vulnerabilities and troubleshoot web technologies.
Crimson Sandstorm [Iran]: Engaged with ChatGPT for social engineering assistance and error troubleshooting.
Charcoal Typhoon [China]: Interacted with ChatGPT to assist in tooling development, scripting, comprehending cybersecurity tools, and generating social engineering content.
Salmon Typhoon [China]: Employed LLMs for exploratory inquiries on a wide range of topics, including sensitive information, high-profile individuals, and cybersecurity, to expand their intelligence-gathering tools and evaluate the potential of new technologies for information sourcing.
Generally, the threat actors used the large language models to enhance their strategic and operational capabilities, including reconnaissance, social engineering, evasion tactics, and generic information gathering.
None of the observed cases involve the use of LLMs for directly developing malware or complete custom exploitation tools.
Instead, the actual coding assistance concerned lower-level tasks such as requesting evasion tips, scripting, disabling antivirus software, and generally optimizing technical operations.
In January, a report from the United Kingdom's National Cyber Security Centre predicted that by 2025, operations by sophisticated advanced persistent threat (APT) groups will benefit from AI tools across the board, especially in developing evasive custom malware.
Last year, according to OpenAI's and Microsoft's findings, there was an uplift in APT activity segments like phishing and social engineering, but the rest of the observed usage was largely exploratory.
OpenAI says it will continue to monitor and disrupt state-backed hackers using specialized monitoring tech, information from industry partners, and dedicated teams tasked with identifying suspicious usage patterns.
This Cyber News was published on www.bleepingcomputer.com. Publication date: Thu, 15 Feb 2024 16:00:21 +0000