Scientists at Google DeepMind, leading a research team, have adeptly utilized a cunning approach to uncover phone numbers and email addresses via OpenAI's ChatGPT, according to a report from 404 Media.
This discovery prompts apprehensions regarding the substantial inclusion of private data in ChatGPT's training dataset, hinting at the risk of inadvertent information exposure.
The researchers expressed astonishment at the success of their attack and emphasized that the vulnerabilities they exploited could have been identified earlier.
They detailed their findings in a study, which is currently available as a not-yet-peer-reviewed paper.
The researchers also mentioned that, to their knowledge, the notable frequency with which ChatGPT emits training data had not been observed before the release of this paper.
The revelation of potentially sensitive information represents merely a fraction of the issue at hand.
As highlighted by the researchers, the broader concern lies in ChatGPT mindlessly reproducing extensive portions of its training data verbatim at an alarming rate.
This susceptibility opens the door to widespread data extraction, possibly supporting the claims of incensed authors who contend that their work is falling victim to plagiarism.
The researchers acknowledge that the attack is rather simple and somewhat amusing.
Instead of repetitive behaviour, ChatGPT begins generating varied and mixed pieces of text, often containing substantial chunks copied from online sources.
OpenAI introduced ChatGPT to the public on November 30, 2022.
This chatbot, built on a robust language model, empowers users to shape and guide conversations according to their preferences in terms of length, format, style, level of detail, and language.
According to the Nemertes enterprise AI research study for 2023-24, over 60% of the organizations surveyed were actively employing AI in production, and nearly 80% had integrated AI into their business operations.
Surprisingly, less than 36% of these organizations had established a comprehensive policy framework to govern the use of generative AI..
This Cyber News was published on www.cysecurity.news. Publication date: Fri, 08 Dec 2023 14:13:04 +0000