In this Help Net Security interview, Matt Holland, CEO of Field Effect, discusses achieving a balance for businesses between the advantages of using AI in their cybersecurity strategies and the risks posed by AI-enhanced cyber threats.
Holland also explores how education, awareness, and implemented measures prepare organizations for these evolving challenges.
There's a lot of buzz around AI supercharging cyberattacks.
There's a lot of hype about what AI and LLMs will enable threat actors to do.
These tools aren't going to suddenly give attackers a way to build exploit chains they can package up and sell to other hackers, nor will they let them create malware that magically evades all known detection techniques.
The advantages AI offers to an attacker are just as available to a defender.
You've got tools that can help automate detection at scale, something human analysis alone isn't particularly well suited to.
Cybersecurity companies are already putting these tools to use to spot patterns and anomalies that could otherwise slip by human detection.
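To illustrate the kind of pattern-and-anomaly spotting described above, here is a minimal, purely illustrative sketch: a toy z-score detector over synthetic hourly event counts. Real products use far richer models and telemetry; the data, threshold, and function names here are invented for the example.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.0):
    """Return indices of counts that deviate strongly from the baseline.

    A toy z-score detector: the point is the principle (machines can
    surface outliers in volumes of data a human would never review),
    not the method, which is deliberately simplistic.
    """
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]

# Hourly failed-login counts (illustrative): a quiet baseline, one spike.
counts = [3, 4, 2, 5, 3, 4, 3, 250, 4, 3, 2, 4]
print(flag_anomalies(counts))  # → [7], the spike hour
```

Even this crude version flags the spike instantly; the value of AI-driven tooling is doing the equivalent across millions of events in real time.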
AI gives these companies a way to distill highly technical alert information into something far more digestible to the average IT worker who may not have a ton of security expertise but is still tasked with managing a solution.
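As a toy illustration of that "distilling" step, the sketch below turns a raw alert record into a plain-English summary. Real products use LLMs and enrichment data for this; the field names and wording here are invented for the example.

```python
def summarize_alert(alert: dict) -> str:
    """Render a raw alert dict as a short plain-English summary.

    Illustrative only: the schema (severity/technique/host/action)
    is hypothetical, not any vendor's actual alert format.
    """
    severity = alert.get("severity", "unknown").upper()
    return (f"[{severity}] {alert.get('technique', 'Suspicious activity')} "
            f"detected on {alert.get('host', 'an unknown host')}. "
            f"Suggested action: {alert.get('action', 'investigate')}.")

alert = {"severity": "high", "technique": "Credential dumping",
         "host": "FINANCE-PC-07", "action": "isolate the host"}
print(summarize_alert(alert))
# → [HIGH] Credential dumping detected on FINANCE-PC-07. Suggested action: isolate the host.
```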
On the other hand, some cybersecurity firms are going to have to up their detection game: AI tools that can draft convincing phishing messages mean you can no longer rely on typos alone to spot an attempt.
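To make that concrete, here is a minimal sketch of the kind of signal defenders fall back on once typos stop being a tell: lookalike sender domains and urgency language. The heuristics, keyword list, and scoring are invented for illustration; production detection relies on ML models and sender-reputation data, not simple checks like these.

```python
import re

# Illustrative social-engineering cues; a real keyword model would be learned.
URGENCY = {"urgent", "immediately", "verify", "suspended", "password"}

def _edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suspicion_score(sender_domain, trusted_domains, body):
    """Score an email on two cheap signals (higher = more suspicious)."""
    score = 0
    # Lookalike domain: close to a trusted domain but not an exact match.
    for trusted in trusted_domains:
        if sender_domain != trusted and _edit_distance(sender_domain, trusted) <= 2:
            score += 2
    # Urgency language, a common social-engineering cue.
    words = set(re.findall(r"[a-z]+", body.lower()))
    score += len(words & URGENCY)
    return score

print(suspicion_score("paypa1.com", {"paypal.com"},
                      "Your account is suspended, verify immediately"))  # → 5
```

A flawlessly written AI-generated message would score zero on spelling-based checks, which is exactly why signals like sender infrastructure and behavioral context matter more than prose quality.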
When it comes to using AI as part of your cybersecurity strategy, you've got to consider the risks: data governance is a big consideration, as is the legal risk of using generative AI output.
AI tools need additional consideration around the training data they're built on, their overall security, and how they approach intellectual property and sensitive data.
First, implement essential cybersecurity controls; these are fundamental to a proactive, effective defense.
The impact AI will have on numerous fields is still uncertain, and so are the associated risks.
It's important that companies establish clear policies around AI use, continuously review the AI-powered tools they employ, and make sure that employees understand the policies and why they're in place.
Attackers are going to continue using AI, which will help them scale their efforts and create more convincing scams.
It's unavoidable that the cybersecurity industry will have to adopt these technologies to some extent in response.
The immediate benefit to defenders is that AI has the potential to provide a major helping hand in threat detection; after all, AI can process far more data than a human ever could.
Any AI-driven solution used in isolation from human expertise is a recipe for disaster.
AI still makes assumptions and leaps of logic that don't quite add up, so human expertise and oversight are still needed to guide any cybersecurity program.
Published on www.helpnetsecurity.com on Tue, 12 Dec 2023.