AI's efficacy is constrained in cybersecurity, but limitless in cybercrime

Security teams use more AI in response to AI-driven threats, threat actors augment their own AI to keep up, and the cycle continues.
There are trust issues with AI security solutions, and the data models used to develop AI-powered security products appear to be perennially at risk.
In contrast, threat actors are taking advantage of AI with almost zero limitations.
Many organizations are skeptical about security firms' AI-powered products.
This is understandable, as many of these AI security solutions are overhyped and fail to deliver.
One of the most advertised benefits of these products is that they simplify security tasks so significantly that even non-security personnel can complete them.
This is difficult to achieve given the evolving nature of threats, as well as various factors that weaken a security posture.
Almost all AI systems still require human direction, and AI is not capable of overruling human decisions.
AI-aided SIEM may accurately flag anomalies for security personnel to evaluate, but an insider threat actor can prevent the proper handling of the issues the system spots, rendering the use of AI in that case practically futile.
By leveraging machine learning to scale up security operations and ensure more efficient detection and response processes over time, XDR provides substantial benefits that can help ease the skepticism over AI security products.
Security solution vendors competing in the crowded market also try to get their products out as soon as possible, with all the bells and whistles they can offer, but with little to no regard for data security.
Organizations can turn to free threat intelligence sources and reputable cybersecurity frameworks like MITRE ATT&CK. In addition, to reflect behavior and activities specific to a particular organization, AI can be trained on user or entity behavior.
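To make the idea of training on user or entity behavior concrete, here is a minimal, hypothetical sketch: it builds a per-user baseline from historical login hours and flags logins that deviate sharply from it. Real UEBA products use far richer features and models; the function names, the z-score threshold, and the sample data are illustrative assumptions, not any vendor's method.

```python
import statistics

def build_baseline(login_hours):
    """Mean and standard deviation of one user's historical login hours."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour sits more than `threshold` deviations from baseline."""
    mean, stdev = baseline
    if stdev == 0:
        return hour != mean
    return abs(hour - mean) / stdev > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]   # typical office-hours logins
baseline = build_baseline(history)
print(is_anomalous(9, baseline))   # False: within the user's normal pattern
print(is_anomalous(3, baseline))   # True: a 3 a.m. login is flagged for review
```

Note that the flagged login still goes to a human for evaluation, which is the division of labor the article describes.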
On the security front, there are many solutions that can successfully keep data breach attempts at bay, but these tools alone are not enough.
Ongoing government-initiated talks for AI regulation and the proposed AI security regulatory framework by MITRE are steps in the right direction.
An AI security system may automatically redact links in an email or web page after detecting risks, but human users can also ignore or disable this mechanism.
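A toy sketch of that redaction step, assuming some upstream detector has already judged the message risky: every URL in the text is replaced with a placeholder. The `looks_risky` heuristic below is a stand-in for a real classifier, and all names here are hypothetical.

```python
import re

# Matches http/https URLs up to the next whitespace.
URL_RE = re.compile(r"https?://\S+")

def looks_risky(text):
    """Placeholder heuristic standing in for a real risk detector."""
    return "password" in text.lower()

def redact_links(text):
    """Strip clickable links from messages the detector deems risky."""
    if looks_risky(text):
        return URL_RE.sub("[link removed]", text)
    return text

msg = "Verify your password at http://example.test/login now"
print(redact_links(msg))  # Verify your password at [link removed] now
```

As the article notes, this protection only holds if users do not ignore or disable the mechanism.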
Organizations can hold regular cybersecurity training to ensure that employees are using security best practices and help them become more adept in detecting threats and evaluating incidents.
Using AI to fight cyber threats will always be challenging due to various factors, including the need to establish trust, the caution needed when using data for machine learning training, and the importance of human decision-making.
Trust can be built with the help of standards and regulations, as well as the earnest efforts of security providers in showing a track record of delivering on their claims.
Data models can be secured with sophisticated data security solutions.
The vicious cycle remains in motion, but there is hope in the fact that it also runs in reverse: as AI threats continue to evolve, AI cyber defense will evolve as well.


This Cyber News was published on www.helpnetsecurity.com. Publication date: Wed, 20 Dec 2023 07:13:05 +0000

