ID R&D introduces voice clone detection to protect users against audio deepfakes

ID R&D introduced voice clone detection as a new option for its IDLive Voice liveness detection product.
Detecting voice clones and audio deepfakes can prevent fraud and crime, deter bad actors, and help preserve trust in the authenticity of digital audio communication.
The software processes a recording of speech and uses AI to determine whether it was spoken by a person or a voice clone.
The covert use of a voice clone is a strong indicator of criminal intent.
Generative AI has substantially accelerated the advancement of audio deepfakes and voice clones.
Today's voice clones, created using just a short audio sample of a person's voice, are virtually impossible for listeners to distinguish from the original speaker.
Voice cloning is a powerful technology with many compelling applications in communications, healthcare, and productivity, particularly when combined with text-to-speech and conversational AI. But it can also be used to commit fraud and other crimes by impersonating people without their consent.
President Biden has issued an Executive Order establishing standards for AI safety and security in an effort to protect Americans' privacy while seizing the promise of AI. Bruce Reed, the White House deputy chief of staff spearheading the Biden administration's AI strategy, recently said that what worries him most about AI is voice cloning.
ID R&D's IDLive Voice clone detection product is delivered as an SDK and operates on customer premises, either as a standalone function or in concert with ID R&D's other voice biometrics and liveness products.
Just three seconds of a person's speech can be submitted to the API to get a score indicating the likelihood that the audio was created using cloning technology from any of the leading providers.
It can detect clone attacks delivered through replays as well as more advanced software- and hardware-based hacking methods.
The detection operates on audio played into a mobile device or any other device with a microphone.
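The article describes the integration only at a high level: roughly three seconds of speech go in, and a score comes back indicating how likely the recording is to be a clone. The sketch below illustrates how a caller might wrap such a check; it is an assumption-laden outline, since ID R&D's actual SDK interface is not documented here. The score_with_clone_detector placeholder, the 0.5 decision threshold, and the WAV input format are all hypothetical.

# A minimal, hypothetical sketch of the flow described above. Only the general
# pattern (submit ~3 seconds of speech, receive a likelihood score, compare it
# against a threshold) comes from the article; every identifier here is invented.

import wave


def clip_duration_seconds(path: str) -> float:
    """Length of a WAV recording in seconds."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate()


def score_with_clone_detector(path: str) -> float:
    """Placeholder for the on-premises SDK call that would return a 0..1
    likelihood that the speech was produced by a voice clone."""
    raise NotImplementedError("Wire this up to the vendor's actual SDK")


def is_likely_cloned(path: str, threshold: float = 0.5, min_seconds: float = 3.0) -> bool:
    """Reject clips shorter than the ~3-second minimum, then apply a decision threshold."""
    if clip_duration_seconds(path) < min_seconds:
        raise ValueError(f"Need at least {min_seconds:.0f} seconds of speech")
    return score_with_clone_detector(path) >= threshold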


This Cyber News was published on www.helpnetsecurity.com. Publication date: Tue, 09 Jan 2024 15:43:04 +0000


Cyber News related to ID R&D introduces voice clone detection to protect users against audio deepfakes

9 Best DDoS Protection Service Providers for 2024 - eSecurity Planet content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More. One of the most powerful defenses an organization can employ against distributed ...
1 year ago Esecurityplanet.com
Meet the UC Berkeley professor tracking election deepfakes - Not in recent history has a technology come along with the potential to harm society more than deepfakes. The manipulative, insidious AI-generated content is already being weaponized in politics and will be pervasive in the upcoming U.S. Presidential ...
1 year ago Venturebeat.com
Voice Assistants and Privacy: Striking the Right Balance - The pervasive presence of voice assistants in our lives is a testament to the power of technology and its potential for furthering human progress. Voice assistants are digital, voice-controlled devices that allow users to interact with a virtual ...
1 year ago Securityzap.com
Deep dive into synthetic voice phishing defense - Voice phishing attacks are an escalating threat and this alarming statistic highlights a pervasive lack of awareness among the general population. At the moment, different techniques are being used by both big and small businesses to fight back ...
1 year ago Cybersecurity-insiders.com
AI, Deepfakes and Digital ID: The New Frontier of Corporate Cybersecurity - iD. The emergence of deepfakes fired the starting pistol in a cybersecurity arms race. Deepfakes will intensify the already acute pressure placed on trust and communication in the public sphere. Because of this focus, what risks being missed is the ...
8 months ago Cyberdefensemagazine.com
New infosec products of the week: January 12, 2024 - Here's a look at the most interesting products from the past week, featuring releases from Critical Start, Dasera, ID R&D, and SpecterOps. SpecterOps announced updates to BloodHound Enterprise that add new Attack Paths focused on Active Directory ...
1 year ago Helpnetsecurity.com
Worried About AI Voice Clone Scams? Create a Family Password - It's a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim's common sense and make them more likely to send money. There's an easy and old-school trick you can use to preempt the scammers: ...
1 year ago Eff.org
AI and deepfakes: How to be AI-savvy - Webroot Blog - Services like Webroot’s identity protection help you monitor for suspicious identity theft activity, keeping an eye on things like the Dark Web, financial transactions, and credit bureau data. So, how can you protect yourself from AI-driven scams ...
4 months ago Webroot.com
McAfee Project Mockingbird defends users against AI-generated scams and disinformation - McAfee announced its AI-powered Deepfake Audio Detection technology, known as Project Mockingbird. This new, proprietary technology was developed to help defend consumers against the surging threat of cybercriminals utilizing fabricated, AI-generated ...
1 year ago Helpnetsecurity.com
LastPass: Hackers targeted employee in failed deepfake CEO call - LastPass revealed this week that threat actors targeted one of its employees in a voice phishing attack, using deepfake audio to impersonate Karim Toubba, the company's Chief Executive Officer. While 25% of people have been on the receiving end of an ...
9 months ago Bleepingcomputer.com
Tech Companies Sign Accord to Combat AI-Generated Election Trickery - Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other ...
11 months ago Securityweek.com
The dangers of voice fraud: We can't detect what we can't see - Despite these concerns, there's a more subtle and potentially more deceptive threat looming: voice fraud. Unlike high-definition video, the typical transmission quality of audio, especially in phone calls, is markedly low. The inherent imperfections ...
7 months ago Venturebeat.com
Daon xSentinel minimizes generative AI voice fraud - Daon announced the addition of xSentinel, an expansion of its AI.X technology. xSentinel provides adaptive synthetic voice protection to create a layer of defense within any voice communication channel and enhance the identity verification ...
1 year ago Helpnetsecurity.com
CVE-2007-0018 - Stack-based buffer overflow in the NCTAudioFile2.AudioFile ActiveX control (NCTAudioFile2.dll), as used by multiple products, allows remote attackers to execute arbitrary code via a long argument to the SetFormatLikeSample function. NOTE: the ...
6 years ago
Deepfake attacks will cost $40 billion by 2027 - Now one of the fastest-growing forms of adversarial AI, deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, growing at an astounding 32% compound annual growth rate. Deloitte sees deep fakes proliferating ...
7 months ago Venturebeat.com
FTC offers $25,000 prize for detecting AI-enabled voice cloning - The U.S. Federal Trade Commission has started accepting submissions for its Voice Cloning Challenge, a public competition with a $25,000 top prize for ideas that protect consumers from the danger of AI-enabled voice cloning for fraudulent activity. ...
1 year ago Bleepingcomputer.com
Google Online Security Blog: I/O 2024: What's new in Android security and privacy - As their tactics evolve in sophistication and scale, we continually adapt and enhance our advanced security features and AI-powered protections to help keep Android users safe. Today, we're announcing more new fraud and scam protection features ...
8 months ago Security.googleblog.com
Is Imitation A Form Of Flattery? Scarlett Johansson Doesn't Think So - It all started when Open AI's CEO Sam Altman unveiled a new ChatGPT version that included a new voice assistant seemingly inspired by the movie Her. Controversy started bubbling over how Scarlett Johansson's AI assistant character influenced ...
8 months ago Blog.avast.com
FTC soliciting contest submissions to help tackle voice cloning technology - The Federal Trade Commission is now accepting submissions for a contest designed to spur development of products and policies to protect consumers from the malicious use of voice cloning technology, which has been fueled by the advance of ...
1 year ago Therecord.media
A primer on storage anomaly detection - Anomaly detection plays an increasingly important role in data and storage management, as admins seek to improve security of systems. In response to these developments, more vendors incorporate storage anomaly detection capabilities into their ...
1 year ago Techtarget.com
New infosec products of the week: February 23, 2024 - Here's a look at the most interesting products from the past week, featuring releases from ManageEngine, Metomic, Pindrop, and Truffle Security. Pindrop Pulse offers protection against audio deepfakes. Pindrop Pulse's ability to detect deepfakes ...
11 months ago Helpnetsecurity.com
Why It's More Important Than Ever to Align to The MITRE ATT&CK Framework - These missed attacks often stem from either hidden gaps in detection coverage - or due to alerts that got buried in a sea of noisy alerts and were never even pursued by the Security Operations Center team. In other words, we need to be able to report ...
1 year ago Cyberdefensemagazine.com
Deepfakes mean biometric security measures won't be enough The Register - Cyber attacks using AI-generated deepfakes to bypass facial biometrics security will lead a third of organizations to doubt the adequacy of identity verification and authentication tools as standalone protections. Or so says consultancy and market ...
1 year ago Go.theregister.com
