Artificial intelligence is not a novel concept.
ChatGPT's launch at the end of 2022 made AI technology widely available at low cost, which in turn sparked a race among nearly all of the mega-cap tech companies to build more powerful models.
For months, experts have been warning about the risks and active threats posed by the current expansion of AI, including rising socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud.
Over the last year, there have been numerous reports of AI-generated deepfake fraud in a variety of formats, including attempts to extort money from unsuspecting consumers, mock artists, and embarrass celebrities at scale.
According to Hong Kong police, scammers using AI-generated deepfake technology stole roughly $25 million from a multinational firm in Hong Kong last week.
A finance employee at the firm transferred the $25 million to designated bank accounts after joining a video conference call with what appeared to be several senior managers, including the company's chief financial officer.
Apart from the worker, no one on the call was genuine.
Despite his initial suspicions, the people on the line appeared and sounded like coworkers he recognised.
Lou Steinberg, a deepfake AI expert and the founder of cyber research firm CTM Insights, believes that as AI grows stronger, the situation will worsen.
The best defence against static deepfake images, he said, is to embed micro-fingerprint technology into camera apps, which would allow social media platforms to recognise when an image is genuine and when it has been tampered with.
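As a rough illustration of the idea, a capture-time fingerprint can be sketched with an HMAC over the image bytes: the camera app tags the image when it is taken, and a platform later recomputes the tag to detect any modification. This is a minimal sketch, assuming a shared secret key; real micro-fingerprinting schemes would use per-device keys and fingerprints robust to benign transforms like resizing, neither of which is modeled here. All names below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret; a real system would use per-device keys.
CAMERA_KEY = b"device-secret-key"

def fingerprint(image_bytes: bytes) -> str:
    """Camera app side: tag the image with an authentication code at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def is_authentic(image_bytes: bytes, tag: str) -> bool:
    """Platform side: recompute the code; any edit to the bytes breaks the match."""
    return hmac.compare_digest(fingerprint(image_bytes), tag)

original = b"\x00\x01\x02\x03"  # stand-in for raw pixel data
tag = fingerprint(original)
```

Verifying `is_authentic(original, tag)` succeeds, while the same check on edited bytes fails, which is the property that would let a platform flag tampered images.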
When it comes to interactive deepfakes, Steinberg believes a simple defence is a pre-agreed code word shared among family members and friends.
Companies such as the Hong Kong firm should develop rules for handling nonstandard payment requests that require codewords or confirmation via a separate channel, according to Steinberg.
A video call cannot be trusted on its own; the officials involved should be called back separately and immediately.
This Cyber News was published on www.cysecurity.news. Publication date: Fri, 09 Feb 2024 12:43:05 +0000