The Federal Trade Commission is warning AI companies against quietly changing their security and privacy policies in hopes of using the data they collect from customers to feed the models behind their products and services.
Surreptitiously amending terms of service without notifying customers is not unusual in the business world, and AI companies' insatiable appetite for data makes them especially tempted to tap the massive amounts of information they collect from consumers and businesses to fuel their innovation, the FTC's Office of Technology and Division of Privacy and Identity Protection wrote in a column this week.
The agency likened these companies' hunger for huge amounts of new data to oil companies' decades-long hunt for new deposits.
Changing the terms of service so the data can be used for their models might seem like an easy answer to some of these organizations, but the FTC will crack down, as it has in the past, on companies that do so without giving users proper notice.
Concerns about the security and privacy of the data used to train large language models and to run the rapidly expanding universe of tools like OpenAI's ChatGPT and Google's Gemini have been at the forefront over the past year, as generative AI innovation and the market around it have exploded.
The worries have ranged from data leaking from AI models to threat groups using generative AI tools to improve their malicious activities.
Menlo Security in a report this week outlined how common it's become for people and companies using generative AI platforms to expose sensitive or proprietary corporate data.
Microsoft and OpenAI detailed how state-sponsored threat groups are leveraging such tools in their attacks.
There are numerous examples of AI users' data being exposed. An incident at OpenAI in March 2023 exposed the personal and payment information of 1.2% of ChatGPT Plus subscribers, and three months later cybersecurity firm Group-IB reported finding as many as 100,000 compromised ChatGPT user accounts for sale on the dark web.
Wiz researchers reported in September that Microsoft's AI team accidentally exposed 38 terabytes of private data while publishing open-source training data on GitHub.
The need to protect such data is critical, and the FTC wants to ensure that AI companies understand what's expected of them.
Keeping an Eye on AI

This isn't the first time the agency has put AI companies on notice.
Last month, the FTC noted that model-as-a-service companies - those that develop and host AI models made available to third parties through an API or end-user interface - face the same pressure to continuously ingest new data that dogs all AI organizations, and that they need to abide by their terms of service and privacy policies.
The incentive to develop new or customer-specific models or to refine existing ones by ingesting more new data can conflict with companies' obligations to protect users' data.