Generative AI engines such as OpenAI's ChatGPT and Google's Bard will become indispensable tools for enterprises and cybersecurity operations in detecting and analyzing malicious code in real-world environments, according to researchers with crowdsourced threat intelligence platform VirusTotal.
Over the past several months, the Google-owned platform has integrated three AI engines into its operations to help with code analysis - starting with Code Insight in April - and during that time found they significantly improved its ability not only to detect and analyze potential threats but also to summarize what it finds.
"The three AI engines implemented in VirusTotal were designed for code analysis, and we included them in the analysis pipeline for any suspicious script," researchers wrote in a report released this week.
"The fantastic capability of AI engines for writing code is also reflected in their capability to 'understand' it and explain in natural language."
This resulted in an "incredible amount of time saved for analysts, who now can more quickly understand what the suspicious code does," they wrote.
"There is another important angle to this: AI engines, unlike other more traditional security tools, provide a detailed explanation instead of a 'binary' verdict, which allows human analysts to make a decision in certain gray cases."
AI is expected to be a boon for both defenders and threat actors, with a report by financial services giant Morgan Stanley pointing to Acumen Research and Consulting numbers estimating that the market for AI in cybersecurity will grow from $14.9 billion in 2021 to $133.8 billion by 2030.
"Cybersecurity organizations increasingly rely on AI in conjunction with more traditional tools such as antivirus protection, data-loss prevention, fraud detection, identity and access management, intrusion detection, risk management and other core security areas," the report's authors wrote, adding that AI's ability to find patterns within massive datasets makes it useful for everything from detecting attacks better than humans to identifying and flagging suspicious emails and messages used in phishing campaigns.
For VirusTotal, the goal was to see if generative AI's capabilities in writing code could translate into analyzing and explaining it.
With the AI engines churning through hundreds of thousands of malware samples over six months, the researchers found the technology brought new functionality to the work that saved analysts significant amounts of time.
In particular, AI proved 70% better than traditional methods alone at detecting and identifying malicious scripts, and 300% better at identifying scripts that attempt to exploit common vulnerabilities.
"While the field is still rapidly evolving, AI engines have demonstrated remarkable potential for automating and enhancing various analysis tasks, particularly those that are time-consuming and challenging, such as deobfuscation and interpreting suspicious behavior," Vincent Diaz, threat intelligence analyst at VirusTotal, wrote in a blog post.
The findings back up what Google researchers predicted in the company's 2024 Cloud Security Forecast: "Cyber defenders will use generative AI related technologies to strengthen detection as well as speed up analysis and other time-consuming tasks, such as reverse engineering."
The VirusTotal researchers added in their report that AI - as it will in other areas of IT and business in general - will make it possible for people to do tasks in which they lack deep experience.
"Malware analysis is a heavily time-consuming task and requires highly specialized knowledge and experience," they wrote.
"AI's ability to 'understand' suspicious script and explain it in natural language reduces not just the time taken in analyzing code, but also the level of knowledge needed to do so - making it possible, for the first time, for non-cybersecurity experts to spot and prevent malware attacks."
According to analytics and AI software and services vendor SAS, 63% of executives surveyed said their most significant skills shortage was in AI and machine learning.
Threat Groups Playing with AI

In its report, Morgan Stanley noted that threat groups also are using generative AI to help them with their nefarious efforts, including by improving social engineering schemes like phishing campaigns, hacking passwords, creating deepfakes, and poisoning data used in AI training models.
VirusTotal's Diaz wrote that, for organizations like his, determining whether malware is generated by AI is complex because it's difficult to trace the origins of source code.
"Instead, we've encountered malware families employing AI themes for distribution, exploiting the current trend of AI-based threats," he wrote, noting hackers impersonating AI applications and services like ChatGPT and Bard.
Originally published on securityboulevard.com, Dec. 1, 2023.