Anyone following the deployment of self-driving cars knows the technology is proving far from foolproof.
The issues lie largely in the thousands of small variations in traffic patterns that speckle our driving lives, to which machines often fail to react appropriately.
Freak occurrences, to be sure, but ones a reasonable human driver could have handled more safely.
As it turns out, the troubles in the world of self-driving cars closely mirror the problem with how we're currently approaching artificial intelligence in cybersecurity.
There is so much hype around the technology that we've failed to root our discussions, and our expectations, in a realistic view of security issues.
Just as self-driving cars can't decipher every human-caused variation in our daily driving lives, AI can never fully protect us from the human errors that compromise our systems.
Those errors are often fueled by the unpredictable variable that is human emotion.
What AI will do, and quickly, is identify the gaps in our current security capabilities.
The trick is to keep the human in mind as we deploy this new technology.
We often cannot anticipate how AI will interpret situations and go about responding to them.
When it comes to security, AI can't know when humans are going to make costly errors like, say, falling for an email or telephone phishing scam.
It can account for logic, but most human errors at their core are emotional.
The recent Okta breach, which exposed the data of 134 of its customers, offers a perfect example: Hackers were able to access credentials through a service account saved to an employee's personal Google profile, which the employee had logged into on a company laptop, presumably out of convenience.
There is no one tool that will solve all of our security problems.
We seem to have forgotten that businesses are made up of real humans who make real mistakes.
Over time, I suspect we'll be able to talk to AI in plain language about security challenges and receive guidance on how to better respond to threats or breaches.
AI eventually will be very good at pointing out errors and warning us of potential security problems or dangerous scenarios.
But it will never be able to stop all emotion-driven human error.
Our response plans should take into account not only the best in automation, detection, and tooling, but also how a change could affect various parts of an organization.
What has always been true remains so: Cybersecurity is an ever-evolving thing, and it requires an incredible amount of human diligence to properly operate and defend an organization.
This Cyber News was published on www.cybersecurity-insiders.com. Publication date: Sat, 06 Jan 2024 23:13:35 +0000