Enterprise organizations aren't alone in embracing generative AI. Cybercriminals are doing so, too.
They're using GenAI to shape their attacks, from crafting more convincing phishing emails and spreading disinformation to poisoning models, injecting prompts, and generating deepfakes.
Threat researchers with cybersecurity firm Sysdig recently detected bad actors using stolen credentials to target large language models, with the eventual goal of selling the access to other hackers.
LLMs are foundational to the myriad generative AI tools that have come onto the market since OpenAI launched ChatGPT 18 months ago.
According to Alessandro Brucato, senior threat research engineer at Sysdig, access to the compromised LLM accounts could be abused in a number of ways, such as to steal money or LLM training data.
The stolen cloud credentials were obtained through a vulnerable version of Laravel, a free, open source PHP framework for building web applications.
The stolen credentials could be used to target 10 cloud-hosted LLM services.
During the attacks, the threat actors used tooling to generate requests targeting the hosted models.
Sysdig researchers also found a script that could check credentials against the 10 AI services to see which were useful to the attackers.
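The checking logic can be sketched roughly as follows. This is an illustrative reconstruction, not Sysdig's actual script; the status-code handling and service names are assumptions for the sake of the example:

```python
# Hypothetical sketch of a stolen-credential checker for hosted LLM
# services: given the HTTP status each service's API returned for a
# test request, decide which keys are worth keeping. The status codes
# and verdicts below are illustrative assumptions.

def classify_credential(status_code: int) -> str:
    """Map an API response status to a rough credential verdict."""
    if status_code == 200:
        return "valid"         # key works and the model is enabled
    if status_code in (401, 403):
        return "invalid"       # bad key, or model access not granted
    if status_code == 429:
        return "rate-limited"  # key works but is being throttled
    return "unknown"


def filter_usable(results: dict[str, int]) -> list[str]:
    """Keep only the services where a stolen key appears usable."""
    return [service for service, code in results.items()
            if classify_credential(code) in ("valid", "rate-limited")]
```

Even a rate-limited key is attractive to an attacker, since throttling implies the credential itself is live.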
The services are designed to give developers easy access to LLMs, with simple user interfaces that let them start building applications quickly.
Before a given model can be run on a cloud vendor's service, a request to enable it typically must be submitted and approved.
Interacting with the hosted language models is also simple, requiring only command-line interface commands.
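To illustrate how little is needed once credentials are in hand, here is a sketch of the kind of JSON request body a hosted-model invocation might take. It mimics the shape of an AWS Bedrock `InvokeModel` call for an Anthropic-style model, but the prompt format and parameters are assumptions for illustration, not details from the Sysdig report:

```python
import json

# Illustrative sketch: building the request body for a hypothetical
# hosted-LLM invocation. With valid (stolen) credentials, a single API
# or CLI call carrying a body like this is all an attacker needs.

def build_invoke_body(prompt: str, max_tokens: int = 256) -> str:
    """Build a JSON request body for a hosted-model invocation."""
    return json.dumps({
        # Anthropic-style completion prompt framing (an assumption here)
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })
```

A body like this could then be passed to a CLI command such as `aws bedrock-runtime invoke-model`, which is the kind of one-line interaction the simplicity of these services makes possible.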
The Sysdig researchers also discovered a reverse proxy for LLMs being used to provide access to the compromised accounts.
The checking code the bad actors used to verify whether credentials could target particular LLMs also references the OAI Reverse Proxy open source project.
Once in the cloud environment, the hackers quietly probed to see what they could do without triggering alerts, and examined how the service was configured.
At that point, such abuse could cost the victim more than $46,000 per day in LLM consumption charges.
There are a number of ways to prevent such an attack, including strong vulnerability management to block initial access and secrets management to ensure credentials are not stored in ways that make them easy to steal.
Organizations can also use cloud security posture management (CSPM) or cloud infrastructure entitlement management (CIEM) tools to ensure cloud accounts carry only the minimum permissions they need.
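A minimal sketch of the kind of check a CIEM tool performs is to flag policy grants broader than what a workload actually uses. The action names and matching logic here are illustrative assumptions, not any vendor's implementation:

```python
import fnmatch

# Illustrative least-privilege check: flag IAM-style wildcard grants
# that cover more LLM access than the workload needs. The action name
# below is an assumed example.

NEEDED_ACTIONS = {"bedrock:InvokeModel"}  # what the workload actually uses


def overly_broad(granted_actions: list[str]) -> list[str]:
    """Return wildcard grants that cover the needed actions and more."""
    flagged = []
    for action in granted_actions:
        covers_needed = any(fnmatch.fnmatchcase(n, action) for n in NEEDED_ACTIONS)
        # exact grants are fine; wildcard grants like "bedrock:*" or "*"
        # that cover a needed action almost certainly grant too much
        if "*" in action and covers_needed:
            flagged.append(action)
    return flagged
```

Replacing a flagged wildcard with the exact action it was covering is the core of the least-privilege cleanup such tools automate.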
Such protections will be needed, according to Brucato.
This Cyber News was published on securityboulevard.com. Publication date: Mon, 13 May 2024 18:43:06 +0000