It's widely known that Microsoft is a major investor in OpenAI, the developer of the conversational chatbot ChatGPT. However, readers of Cybersecurity Insiders are now encountering an unexpected twist in the narrative - ChatGPT appears to be refusing instructions from users or brushing them off with indifferent, lackluster excuses.
Shortly after Thanksgiving 2023, users began noticing that ChatGPT was no longer carrying out its assigned tasks as dutifully as before, a shift some half-jokingly called the first sign of AI tech going rogue.
In the wake of numerous complaints flooding social media platforms, OpenAI addressed the issue on December 13, 2023, acknowledging the unpredictability of the model's behavior and promising a swift resolution.
One prevalent complaint is that ChatGPT delivers brief or underdeveloped responses, particularly for requests involving news coverage and article writing.
For some users, matters took a more serious turn when they asked ChatGPT to generate an article.
Instead of drawing on its training data, the advanced chatbot began responding in a more human-like manner, questioning the practicality of the chosen topic.
Users requesting article rewrites reported similar experiences, only to be told the topic lay beyond the bot's knowledge.
Amid these quirks, users began joking that the program had either gone rogue or developed a case of laziness.
It's a turn of events that may be hard to believe, especially considering that ChatGPT only became publicly available in November 2022.
After launching as a free research preview, the bot gained a paid subscription tier, ChatGPT Plus, in February 2023, alongside the free version.
Initially celebrated for its capabilities, the software now seems to be exercising a degree of discernment in how it responds to users.