New 'LLMjacking' Attack Exploits Stolen Cloud Credentials

The attackers obtained the cloud credentials from a system running a vulnerable version of Laravel, according to a Sysdig blog post published on May 6.
Unlike previous discussions of LLM-based AI systems, which focused on prompt abuse and tampering with training data, this attack aimed to sell LLM access to other cybercriminals while the legitimate cloud account owner incurred the costs.
In this instance, the attackers used the exfiltrated cloud credentials to gain access to the cloud environment, where they targeted LLMs hosted by cloud providers.
They targeted a hosted Claude model from Anthropic; if left undetected, the abuse could cost the victim over $46,000 in LLM consumption per day.
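The daily cost of this kind of abuse depends on request volume, token counts and per-token model pricing. The sketch below is a hedged illustration of that arithmetic; the rates and token figures are assumptions for demonstration, not Sysdig's exact methodology or Anthropic's current pricing.

```python
def daily_llm_cost(requests_per_minute: float,
                   input_tokens: int,
                   output_tokens: int,
                   price_in_per_1k: float,
                   price_out_per_1k: float) -> float:
    """Estimate the daily cost of sustained LLM API abuse.

    All parameters are illustrative assumptions: a steady request rate,
    fixed token counts per request, and per-1K-token prices.
    """
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_minute * 60 * 24


# Hypothetical scenario: one large-context request per second.
cost = daily_llm_cost(requests_per_minute=60,
                      input_tokens=100_000,
                      output_tokens=4_096,
                      price_in_per_1k=0.008,
                      price_out_per_1k=0.024)
print(f"Estimated daily cost: ${cost:,.2f}")
```

Even modest sustained request rates push daily costs into five figures once large context windows are involved, which is how an undetected compromise can rack up tens of thousands of dollars per day.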
The researchers also uncovered evidence of a reverse proxy being used to access compromised accounts.
The attackers demonstrated interest in accessing LLMs across different services, using tools to check credentials against ten different AI services, including AWS Bedrock, Azure AI and GCP Vertex AI.
To mitigate such attacks, Sysdig recommended implementing vulnerability and secrets management practices, along with Cloud Security Posture Management (CSPM) or Cloud Infrastructure Entitlement Management (CIEM) solutions, to minimize permissions and prevent unauthorized access.
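One concrete way to minimize permissions in an AWS environment is to explicitly deny model-invocation actions for identities that have no business calling LLM APIs. The policy below is a minimal sketch of that idea; it assumes Amazon Bedrock is in use, and it should be scoped to the roles and resources appropriate for your account rather than applied blindly.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBedrockModelInvocation",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```

Attaching an explicit deny like this to non-AI workload roles means that even if their credentials leak, the stolen keys cannot be resold for LLM access.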


This Cyber News was published on www.infosecurity-magazine.com. Publication date: Thu, 09 May 2024 16:00:10 +0000
