Google recently revealed that state-sponsored advanced persistent threat (APT) groups are using the company’s Gemini AI assistant for a range of tasks: getting help with coding while developing tools and scripts, researching publicly disclosed vulnerabilities, looking up explanations of technologies, gathering details on target organizations, and searching for ways to move deeper into compromised networks.

Defenders have been using AI for almost a decade for purposes such as detecting previously unseen variants of malware samples. More recently, with the advent of generative AI, cybersecurity vendors have found new ways to use AI to battle AI.

Combine those attacks with threats based on deepfakes, and even the potential poisoning of AI models, and it’s fair to say that generative AI has so far been a mixed bag at best from a cybersecurity standpoint. But with rapid innovation, responsiveness to user feedback and a keen understanding of how AI can serve as a tool for protection, security vendors will keep working to gain the upper hand on cyberattackers who rely on AI for their criminal activities.

The Acronis Threat Research Unit (TRU) is a team of cybersecurity experts specializing in threat intelligence, AI and risk management.
Published on www.bleepingcomputer.com, Tue, 11 Mar 2025 14:10:14 +0000.