Four foreign and two U.S. developers unlawfully accessed generative AI services, reconfigured them to allow the creation of harmful content such as celebrity deepfakes, and then resold access to the tools, Microsoft said Thursday in a legal filing. Users created “non-consensual intimate images of celebrities and other sexually explicit content” with the modified AI tools, including Microsoft’s Azure OpenAI services, the tech giant said in a blog post about its amended civil litigation complaint.

The developers of the malicious AI tools are part of a “global cybercrime network” that Microsoft tracks as Storm-2139, the blog post said. The four foreign developers, the company said, are Arian Yadegarnia, aka “Fiz,” of Iran; Alan Krysiak, aka “Drago,” of the United Kingdom; Ricky Yuen, aka “cg-dot,” of Hong Kong; and Phát Phùng Tấn, aka “Asakuri,” of Vietnam. The six individuals mentioned in the blog post are among 10 “John Does” listed in the original complaint, Microsoft said. Storm-2139 gained access to the AI services through “exploited exposed customer credentials scraped from public sources,” Microsoft said.

After Microsoft’s initial filing, the court issued a temporary restraining order and preliminary injunction that enabled the company to seize a website connected to Storm-2139. “The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another,” said the blog post, written by Steven Masada, assistant general counsel of Microsoft’s Digital Crimes Unit. As chatter about the lawsuit increased, participants in the group’s communications channels also doxed Microsoft lawyers, “posting their names, personal information, and in some instances photographs,” the company said.
Published on therecord.media, Thu, 27 Feb 2025.