At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors.
In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injection, attempted misuse of large language models (LLMs), and fraud.
Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape.
We are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security, to elevate defenders everywhere.
In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions to mitigate the risks associated with the use of our AI tools and APIs by the nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track.
Identification and action against malicious threat actors' use: Upon detecting the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APTs, APMs, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
Notification to other AI service providers: When we detect a threat actor's use of another service provider's AI services, APIs, or systems, Microsoft will promptly notify the service provider and share relevant data.
Transparency: As part of our ongoing efforts to advance responsible use of AI, Microsoft will inform the public and stakeholders about actions taken under these threat actor principles, including the nature and extent of threat actors' use of AI detected within our systems and the measures taken against them, as appropriate.
Microsoft Threat Intelligence tracks more than 300 unique threat actors, including 160 nation-state actors, 50 ransomware groups, and many others.
Consistent with our efforts to disrupt threat actors across our technologies and to work closely with partners, Microsoft continues to study threat actors' use of AI and LLMs, partner with OpenAI to monitor attack activity, and apply what we learn to continually improve defenses.
This blog provides an overview of observed activities collected from known threat actor infrastructure as identified by Microsoft Threat Intelligence, then shared with OpenAI to identify potential malicious use or abuse of their platform and protect our mutual customers from future threats or harm.
The threat ecosystem over the last several years has revealed a consistent theme of threat actors following trends in technology in parallel with their defender counterparts.
Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.
Language support is a natural feature of LLMs and is attractive to threat actors who focus continually on social engineering and other techniques that rely on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships.
At the same time, we believe this research is important to publish, both to expose the early-stage, incremental moves we observe well-known threat actors attempting and to share information with the defender community on how we are blocking and countering them.
The threat actors profiled below are a sample of observed activity that we believe best represents the TTPs the industry will need to track better through updates to the MITRE ATT&CK® framework or the MITRE ATLAS™ knowledge base.
Emerald Sleet overlaps with threat actors tracked by other researchers as Kimsuky and Velvet Chollima.
LLM-informed reconnaissance: Engaging LLMs for queries on a diverse array of subjects, such as global intelligence agencies, domestic concerns, notable individuals, cybersecurity matters, topics of strategic interest, and various threat actors.
In closing, AI technologies will continue to evolve and be studied by various threat actors.
Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers, and aid the broader security community.
Published on www.microsoft.com on February 15, 2024.