While governments worry about the unrealistic prospect of artificial intelligence triggering Armageddon, generative AI tools actually present an imminent threat to their citizens.
As with any technology evolution, cybercriminals are already using AI to improve their scams - and general internet users are the most at risk.
One of the easiest ways to spot a phishing email is to look at the spelling and grammar.
Very often, fake emails sent by scammers contain obvious spelling and grammar errors - sometimes they even misspell the name of the organization they are impersonating.
Generative AI tools like ChatGPT and Google Bard rarely make those kinds of mistakes.
These AI models have been trained to produce fluent, correctly spelled text, so spelling and grammar errors almost never appear in their output.
This means that cybercriminals can rely on these tools to create accurate messages that are even more convincing - and therefore more likely to trick people into becoming victims of their scams.
Links in emails make it easy to access specific information quickly - and that's exactly why scammers use them to trick you into visiting a fake website.
Many online scams encourage you to visit infected websites or to download malware onto your computer.
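One common trick is making the clickable text in an email look like a trusted address while the underlying link points somewhere else entirely. As a rough illustration of this mismatch (a hypothetical sketch, not a Panda Security tool), a `suspicious_links` helper could compare the domain shown to the reader with the domain the link actually opens:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects (visible text, href) pairs from anchor tags in HTML email."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def suspicious_links(email_html):
    """Flag links whose visible text looks like a web address for a
    different domain than the one the href actually points to."""
    parser = LinkCollector()
    parser.feed(email_html)
    flagged = []
    for text, href in parser.links:
        # Treat the visible text as a URL so we can extract its hostname.
        shown = urlparse(text if "://" in text else "https://" + text).hostname
        actual = urlparse(href).hostname
        if shown and actual and shown != actual:
            flagged.append((text, href))
    return flagged

# Example: the text claims to be a bank, but the link goes elsewhere.
email = '<p>Log in at <a href="https://evil.example.net/login">www.mybank.com</a></p>'
print(suspicious_links(email))
# → [('www.mybank.com', 'https://evil.example.net/login')]
```

Real mail clients and security products use far more signals than this, but the underlying idea is the same: what a link says and where it goes are two different things, and only the second one matters.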
By installing software updates whenever they are available, you can fix many of the security vulnerabilities that scammers rely on to steal your data.
You can add an additional layer of protection by installing antivirus software.
This will automatically scan downloads and alert you if it detects malware or other suspicious activity.
You can download a free trial of Panda Dome antimalware here.
Generative AI bots can do much more than simply write effective emails and letters.
Some can even write computer code that programmers can use to accelerate their application development processes.
Initially, some experts were concerned that these code-writing capabilities would allow low-skill criminals to create sophisticated ransomware and hacking tools.
The good news is that this does not appear to be the case.
The UK's National Cyber Security Centre, the government body responsible for providing security guidance to British organizations, has confirmed that they do not believe generative AI will make malicious apps any more effective on a technical level.
They do warn, however, that AI tools will help scammers identify more potential victims.
Generative AI does present an increased security risk to us all - but by following the steps described above, you are at far less risk of becoming a victim.
This Cyber News was published on www.pandasecurity.com. Publication date: Mon, 12 Feb 2024 09:43:04 +0000