Nov 22 - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers. Researchers consider math to be a frontier of generative AI development, and such reasoning ability could be applied to novel scientific research, AI researchers believe.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Krystal Hu reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing.