Deepfakes are now one of the fastest-growing forms of adversarial AI: related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, an astounding 32% compound annual growth rate.
Deloitte sees deepfakes proliferating in the years ahead, with banking and financial services a primary target.
Deepfakes typify the cutting edge of adversarial AI attacks, with incidents up 3,000% last year alone.
The latest generation of generative AI apps, tools and platforms provides attackers with what they need to create deepfake videos, impersonated voices and fraudulent documents quickly and at a very low cost.
Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers costs roughly $5 billion annually. The report underscores how severe a threat deepfake technology poses to banking and financial services.
Adversarial AI opens attack vectors no one sees coming, producing a more complex, nuanced threatscape that prioritizes identity-driven attacks.
Unsurprisingly, many enterprises are unprepared: Ivanti's latest research finds that 30% (nearly one in three) have no plans for identifying and defending against adversarial AI attacks, which would most likely begin with deepfakes of their key executives.
Using a deepfake as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware and API-related vulnerabilities is becoming more commonplace.
VentureBeat regularly hears from enterprise software cybersecurity CEOs who prefer to stay anonymous about how deepfakes have progressed from easily identified fakes to recent videos that look legitimate.
Voice and video deepfakes that impersonate industry executives appear to be a favorite attack strategy, aimed at defrauding their companies of millions of dollars.
Adding to the threat, nation-states and large-scale cybercriminal organizations are aggressively doubling down on developing, hiring and growing their expertise with generative adversarial network (GAN) technologies.
Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world's biggest ad firm shows how sophisticated attackers are becoming.
In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity practitioners defend systems while also commenting on how attackers are using it.
CrowdStrike's Intelligence team has invested significant time in understanding the nuances of what makes a deepfake convincing and where the technology is headed to attain maximum impact on viewers.
CrowdStrike is known for its deep expertise in AI and machine learning and its unique single-agent model, which has proven effective in driving its platform strategy.
With such deep expertise in the company, it's understandable how its teams would experiment with deepfake technologies.
Enterprises risk losing the AI war if they don't stay at parity with attackers' fast pace of weaponizing AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities.
This Cyber News was published on venturebeat.com. Publication date: Mon, 01 Jul 2024 23:13:05 +0000