IBM X-Force hasn't yet seen any AI-engineered campaigns, but mentions of AI and ChatGPT are proliferating on the dark web.
The X-Force Threat Intelligence Index 2024 report identified over 800,000 references to the emerging technology on illicit and dark web forums last year.
While X-Force does expect AI-enabled attacks in the near term, it predicts the real threat will emerge once enterprise adoption of AI matures.
Though OpenAI's ChatGPT has become synonymous with generative AI, a competition is afoot to determine which large language models are the most effective and transparent.
In a test of 10 popular AI models, Google's Gemini outpaced competitors, followed by OpenAI's GPT-4 and Meta's Llama 2.
The test, created by Vero AI, measures the visibility, integrity, optimization, legislative preparedness, effectiveness and transparency of models.
Coca-Cola has a $1.1 billion partnership with Microsoft to use its cloud and generative AI services.
General Mills used Google's PaLM 2 model to deploy a private generative AI tool to its employees.
The threat intelligence firm expects that once a single AI technology reaches 50% market share, or once the market consolidates to three or fewer primary AI offerings, the cybercrime ecosystem will start developing tools and attacks to go after AI. In the meantime, AI can amplify already dominant attack campaigns.
In February, Microsoft reported that hackers from North Korea, Iran and Russia were using OpenAI's tools to mount cyberattacks, activity the company said it has since shut down.
Generative AI can turbocharge social engineering and phishing attacks.
Threat actors can tailor sophisticated phishing attacks by scraping data about targets from all corners of the internet, correlating seemingly unrelated pieces of information to build a fuller picture of a person.
AI can also be used to impersonate a real person applying for a job at a company, complete with a polished cover letter and a PDF resume.
Threat actors can use generative AI to crack passwords, increase the launch volume of previously successful attacks, and get around cybersecurity defenses.
Generative AI is also being used to fuel disinformation and misinformation campaigns, said Adam Meyers, CrowdStrike's senior vice president of counter adversary operations.
In the CrowdStrike 2024 Global Threat Report, the company tracked AI-generated images related to the Israel-Hamas war.
While the faked images are relatively easy to spot - people with six fingers in some images, for example - the technique is still being used, and will most likely continue to be refined through 2024 to disrupt elections.
That doesn't mean corporate cybersecurity professionals are off the hook, because attacks won't be confined to the political arena.
If someone can make a deepfake of a country's president, said Meyers, they can also potentially make a deepfake of a company president.
A malicious actor could use that deepfake on a Zoom call to get employees to do things like make money transfers, timing it for when the hackers know, via social engineering, that the real president will be unavailable.