The cybersecurity landscape is witnessing an alarming rise in malicious artificial intelligence (AI) applications, with research reporting a 200% surge in the development and deployment of such tools. These trends show the broader risks posed by AI-driven cyber threats, as cybercriminals now leverage advanced AI capabilities to craft malware, phishing campaigns, and disinformation at unprecedented scale. AI-powered tools now enable the automation of tasks that traditionally required human effort, such as generating convincing phishing emails or bypassing CAPTCHA systems.

One particular strain of malicious AI tooling discovered recently involves polymorphic malware that uses AI to evade detection by antivirus systems. By analyzing the behavior of these detection tools, the malware can modify its code dynamically, altering its signature each time it executes to avoid suspicion. Kela analysts have stressed the growing use of forums where attackers exchange such code samples, refining them to improve evasion further. Indeed, Kela researchers noted the growth of underground marketplaces where malicious developers discuss and refine these tools, with some offering "jailbreaking" techniques that bypass the programmed ethical guidelines of legitimate AI systems.

Beyond code obfuscation, AI-powered threat actors also employ persistence strategies that allow malware to remain on infected systems undetected for extended periods. These tactics include using AI to monitor system health and activate malicious operations only when the device is idle, reducing activity that might alert monitoring tools.

The emergence of AI-enhanced tools as weapons in the cyber threat landscape stems from several concurrent developments. With the exponential increase in malicious AI tools and the parallel rise in efforts to exploit legitimate AI systems for unethical purposes, robust mitigation strategies are more important than ever.
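The signature-shifting principle behind such polymorphic code can be illustrated with a deliberately benign sketch: a harmless stand-in payload is XOR-encoded with a different single-byte key each time it is repacked, so the stored bytes, and therefore any hash-based antivirus signature, change while the decoded content stays identical. The payload, helper names, and keys here are illustrative assumptions, not samples from the Kela research.

```python
import hashlib

# Benign stand-in for a program body; real polymorphic malware mutates
# its own code, but the signature-evasion principle is the same.
PAYLOAD = b"print('hello')"

def repack(payload: bytes, key: int) -> bytes:
    """XOR-encode the payload with a single-byte key, prepending the key
    so a decoder stub can recover the original bytes at run time."""
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(blob: bytes) -> bytes:
    """Reverse repack(): strip the key byte and undo the XOR."""
    key = blob[0]
    return bytes(b ^ key for b in blob[1:])

# Two repacks with different keys: the on-disk bytes and their SHA-256
# "signatures" differ, yet both decode to the identical payload.
variant_a = repack(PAYLOAD, 0x3C)
variant_b = repack(PAYLOAD, 0xA7)
print(hashlib.sha256(variant_a).hexdigest()[:12])
print(hashlib.sha256(variant_b).hexdigest()[:12])
print(decode(variant_a) == decode(variant_b) == PAYLOAD)  # True
```

Because every repack yields a new byte sequence, a scanner matching fixed signatures never sees the same artifact twice; this is why defenders increasingly rely on behavioral and heuristic detection rather than static hashes.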
Second, the improved sophistication of large language models such as ChatGPT has inadvertently enabled attackers to customize social engineering templates that evade traditional defense mechanisms. This development not only complicates detection but also extends the period during which an infected machine can be exploited for malicious purposes. As researchers and cybersecurity professionals work to counter these evolving threats, the industry must prioritize collaboration and innovation in defense mechanisms to keep pace with attackers.
This Cyber News was published on cybersecuritynews.com. Publication date: Tue, 25 Mar 2025 15:10:06 +0000