Hugging Face dodged a cyber-bullet with Lasso Security's help

Further validating how brittle the security of generative AI models and their platforms is, Lasso Security helped Hugging Face avoid a potentially devastating attack by discovering that 1,681 API tokens were exposed and at risk of compromise.
The tokens were discovered by Lasso researchers, who recently scanned GitHub and Hugging Face repositories and performed in-depth research on each exposed token.
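Hugging Face user access tokens share a recognizable `hf_` prefix, which is what makes this kind of repository scan practical. Below is a minimal sketch of such a scanner, assuming a simple regex over file contents; the pattern and helper name are illustrative, not Lasso's actual tooling:

```python
import re

# Hugging Face user access tokens begin with the "hf_" prefix followed
# by an alphanumeric body; this pattern is an approximation of that shape.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings that look like Hugging Face API tokens."""
    return HF_TOKEN_PATTERN.findall(text)

# Example: a token accidentally committed in a config file.
sample = 'api_key = "hf_' + "A" * 34 + '"  # accidentally committed'
print(find_candidate_tokens(sample))
```

In practice, scanners like this run over every file and every commit in a repository's history, since a secret removed in a later commit remains recoverable from earlier ones.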
Researchers successfully accessed 723 organizations' accounts, including Meta, Hugging Face, Microsoft, Google, VMware, and many more.
Hugging Face has become indispensable to organizations developing LLMs, with over 50,000 organizations relying on it today as part of their devops efforts.
Serving as the definitive resource and repository for large language model developers, devops teams, and practitioners, the Hugging Face Transformers library hosts over 500,000 AI models and 250,000 datasets.
Another reason Hugging Face is growing so quickly is that its popular Transformers library is open source.
Devops teams tell VentureBeat that the collaboration and knowledge sharing an open-source platform provides accelerate LLM development, leading to a higher probability that models will make it into production.
Attackers looking to capitalize on LLM and generative AI supply chain vulnerabilities, the possibility of poisoning training data, or exfiltrating models and model training data see Hugging Face as the perfect target.
With Hugging Face gaining momentum as one of the leading LLM development platforms and libraries, Lasso's researchers wanted to gain deeper insight into its registry and how it handled API token security.
In November 2023, the researchers investigated Hugging Face's approach to API token security.
Poisoning training data would introduce potential vulnerabilities or biases that could compromise LLM and model security, effectiveness, or ethical behavior.
According to Lasso's research team, compromised API tokens are quickly used to gain unauthorized access to, copy, or exfiltrate proprietary LLM models.
A startup CEO whose business model relies entirely on an AWS-hosted platform told VentureBeat it costs on average $65,000 to $75,000 a month in compute charges to train models on its AWS ECS instances.
Managing API tokens more effectively needs to start at creation, with Hugging Face ensuring each token is unique and authenticated at identity creation.
Focusing more on the lifecycle management of each token and automating identity management at scale will also help.
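The lifecycle management described above can be sketched as a small registry that issues unique tokens with an expiry and supports rotation and revocation. The class and policy below are illustrative assumptions, not Hugging Face's implementation:

```python
import secrets
import time

class TokenRegistry:
    """Toy registry illustrating issue/rotate/revoke for API tokens."""

    def __init__(self, ttl_seconds: float = 30 * 24 * 3600):
        self.ttl = ttl_seconds
        self._tokens: dict[str, float] = {}  # token -> expiry timestamp

    def issue(self) -> str:
        # Each token is unique and bound to an expiry at creation time.
        token = "tok_" + secrets.token_urlsafe(24)
        self._tokens[token] = time.time() + self.ttl
        return token

    def is_valid(self, token: str) -> bool:
        expiry = self._tokens.get(token)
        return expiry is not None and time.time() < expiry

    def rotate(self, old_token: str) -> str:
        # Rotation revokes the old credential and issues a fresh one,
        # limiting the window in which a leaked token stays usable.
        self.revoke(old_token)
        return self.issue()

    def revoke(self, token: str) -> None:
        self._tokens.pop(token, None)

registry = TokenRegistry()
t1 = registry.issue()
t2 = registry.rotate(t1)
print(registry.is_valid(t1), registry.is_valid(t2))  # False True
```

Automating rotation and expiry at scale is what turns lifecycle management from a manual chore into a policy the platform enforces by default.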
All the above factors are core to Hugging Face going all in on a zero-trust vision for their API tokens.
As Lasso Security's research team shows, greater vigilance alone isn't enough to secure thousands of API tokens, which are the keys to the LLM kingdoms many of the world's most advanced technology companies are building today.
That Hugging Face dodged this bullet shows why posture management and a continual doubling down on least-privileged access, enforced down to the API token level, are needed.
Attackers know a gaping disconnect exists between identities, endpoints, and every form of authentication, including tokens.
The research Lasso released today shows why every organization must verify every commit to ensure no tokens or other sensitive information are pushed to repositories, and must implement security solutions specifically designed to safeguard transformative models.
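The commit-level check recommended above can be approximated with a pre-commit hook that scans staged changes for token-like strings before they ever reach a remote. The patterns here are illustrative; purpose-built secret-scanning tools maintain far larger rule sets:

```python
import re
import subprocess
import sys

# Illustrative patterns for common credential shapes.
SECRET_PATTERNS = [
    re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),  # Hugging Face token shape
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),     # AWS access key ID shape
]

def scan_diff(diff_text: str) -> list[str]:
    """Return added lines in a unified diff that match a secret pattern."""
    hits = []
    for line in diff_text.splitlines():
        # Only "+" lines are new content; "+++" is the file header.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line)
    return hits

if __name__ == "__main__":
    try:
        # Staged changes only: exactly what would be committed.
        diff = subprocess.run(
            ["git", "diff", "--cached"], capture_output=True, text=True
        ).stdout
    except OSError:
        diff = ""  # git unavailable; nothing to scan
    findings = scan_diff(diff)
    if findings:
        print("Refusing commit; possible secrets found:")
        for line in findings:
            print(" ", line)
        sys.exit(1)
```

Saved as `.git/hooks/pre-commit` (hypothetical placement), a non-zero exit blocks the commit, forcing the developer to remove the credential before it lands in history.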


This Cyber News was published on venturebeat.com. Publication date: Mon, 04 Dec 2023 16:13:05 +0000

