As cybercriminals finesse the use of generative AI (GenAI), deepfakes, and other AI-infused techniques, their fraudulent content is becoming disconcertingly realistic, posing an immediate security challenge for individuals and businesses alike. Deepfakes, like many AI-based threats, are effective because they work in combination with tried-and-tested scamming techniques such as social engineering and fraudulent calls.

A common type of attack is the fraudulent use of biometric data, an area of particular concern given the widespread use of biometrics to grant access to devices, apps, and services. In one example, a convicted fraudster in the state of Louisiana managed to use a mobile driver's license and stolen credentials to open multiple bank accounts, deposit fraudulent checks, and buy a pickup truck. In another, IDs created without facial-recognition biometrics on Aadhaar, India's flagship biometric ID system, allowed criminals to open fake bank accounts. Rather than mimicking the identities of real people, as in these examples, cybercriminals can also use biometric data to inject fake evidence directly into a security system.

As deepfakes and AI-based cyber threats escalate, businesses must leverage advanced data analytics to strengthen their defenses. Enterprises need to be able to confidently detect subtle behavior changes taking place across every facet of their network in real time, from users and devices to infrastructure and applications. However, to exploit this defensive advantage, they must address the quality of the data feeding their AI models. Data quality plays a critical role in pattern recognition, anomaly detection, and other machine learning-based methods used to fight modern cybercrime.

While many enterprises have relied on (often insecure) log files, many are now embracing telemetry data, such as network traffic intelligence from deep packet inspection (DPI) technology, because it provides the "ground truth" on which to build effective AI defenses. The volume and patterns of data across a given network are a unique signifier of that network, much like a fingerprint. In a zero-trust world, telemetry data like the kind supplied by DPI provides the right sort of "never trust, always verify" foundation to fight the rising tide of deepfakes.

Predictive AI models can also identify potential vulnerabilities, or even future attack vectors, before they are exploited, enabling pre-emptive security measures such as game-theoretic defenses or honeypots that divert attackers' attention from valuable targets. By adopting a zero-trust model, enhancing data quality, and utilizing AI-driven predictive analytics, organizations can proactively counter these sophisticated attacks and protect their assets, and their reputations, in an increasingly perilous digital landscape.
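The "network fingerprint" idea described above, that traffic volumes follow a stable, site-specific pattern against which deviations stand out, can be sketched with a minimal baseline-deviation check. This is a deliberately simplified illustration, not a production DPI pipeline: real systems use far richer features than byte counts, and the function name, threshold, and sample figures below are all hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observed traffic volumes deviating more than
    `threshold` standard deviations from the baseline profile."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [v for v in observed if abs(v - mu) / sigma > threshold]

# Hypothetical per-minute byte counts learned from normal traffic.
baseline = [1200, 1150, 1230, 1180, 1210, 1190, 1220, 1170]

# New readings: one sudden burst that breaks the "fingerprint".
observed = [1205, 1185, 9800, 1215]

print(flag_anomalies(baseline, observed))  # -> [9800]
```

A fixed z-score threshold is the simplest possible anomaly detector; the machine-learning methods the article alludes to (isolation forests, autoencoders, sequence models) generalize this same idea to many correlated telemetry features at once, which is why the quality of the underlying data matters so much.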
This Cyber News was published on www.darkreading.com. Publication date: Wed, 02 Oct 2024 23:00:21 +0000