Deepfakes mean biometric security measures won't be enough - The Register

Cyber attacks that use AI-generated deepfakes to bypass facial biometric security will lead a third of organizations to doubt that identity verification and authentication tools are adequate as standalone protections.
Or so says consultancy and market watcher Gartner. Deepfakes have dominated the news since sexually explicit AI-generated images of popstar Taylor Swift went viral, prompting fans, Microsoft, and the White House to call for action.
The relentless march of AI technology is also causing headaches for enterprise security.
Remote account recovery, for example, might rely on an image of the individual's face to verify their identity. Such approaches can now be duped by AI deepfakes and need to be supplemented by additional layers of security, Gartner VP Analyst Akif Khan told The Register.
He said that defense against the new threat can come from supplementing existing measures or improving on them.
Other supplementary signals might include the device's location or the frequency of requests coming from the same device, he said.
Security system developers are also trying to use AI - typically deep neural networks - to inspect presented images for signs that they are deepfakes.
Organizations should use both approaches to defend against deepfake threats to biometric security, he said.


This Cyber News was published on go.theregister.com. Publication date: Thu, 01 Feb 2024 19:13:04 +0000
