A new kind of app store for ChatGPT may expose users to malicious bots, as well as legitimate ones that siphon their data to insecure external locations.
ChatGPT's rapid rise in popularity, combined with the open source accessibility of the early GPT models, widespread jailbreaks, and even more creative workarounds, led to a proliferation of custom GPT models in 2023. Until now, they had been shared and enjoyed by individual tinkerers scattered around different corners of the internet.
The GPT store, launched yesterday, allows OpenAI subscribers to discover and create custom bots in one place.
Being under OpenAI's umbrella doesn't necessarily mean these bots will provide the same level of security and data privacy that the original ChatGPT does.
Looks, Acts Like ChatGPT, But Not ChatGPT
OpenAI has not escaped its fair share of security incidents, but the walled garden of ChatGPT inspires confidence in users who are comfortable sharing personal information with its chatbot.
The user interface for GPTs from the GPT store is identical to that of OpenAI's first-party model. That consistency benefits the user experience, but it is potentially deceptive where security is concerned.
Not all of your data is accessible to the third-party developers of these bots, but some of it may be.
Further, because the company plans to monetize based on engagement, attackers might try to develop addictive offerings that conceal their maliciousness.
More Apps, More Problems
OpenAI isn't the first company with an app store.
Whether its controls are as stringent as those of Apple, Google, and other established store operators remains an open question.
In the two months since OpenAI introduced customizable GPTs, the company claims, community members have already created more than 3 million new bots.
Despite his concerns about the vetting process, Paterson admits that one potential upside of the app store's creation is that it may raise the bar for third-party applications. Even so, he says, that doesn't mean the apps will necessarily be secure.