The new committee comes after two key members of the Superalignment team - OpenAI co-founder Ilya Sutskever and AI researcher Jan Leike - left the company.
The shutting down of the Superalignment team, the departure of Sutskever and Leike, and now the creation of an executive-led safety and security group are only the latest moments in an ongoing in-house drama. That drama burst into public view with Altman's firing as CEO by members of the board at the time, who said he had not been open with them, and amid reports that some in the company - including Sutskever - worried that OpenAI's technologies were being developed too quickly, with the innovation outpacing the controls necessary to ensure that AI can be used safely.
Less than a week later, Altman was back as CEO, with a revamped board in place and some executives let go.
Two of the former board members, speaking to The Economist, said they were concerned that OpenAI - as well as high-profile AI companies like Microsoft and Google - is innovating too rapidly to take into account the adverse effects that could come with the technology.
Helen Toner, with Georgetown University's Center for Security and Emerging Technology, and tech entrepreneur Tasha McCauley argued that AI companies can't self-govern and that government oversight is needed.
The rollout of AI can't be controlled only by private companies, Toner and McCauley said.
AI - particularly in this relatively new era of generative AI - has generated almost as much concern about safety and security as it has excitement about its potential.
Those concerns span everything from bias and discrimination in model outputs to hallucinations - made-up answers that are wrong - as well as data security leaks, sovereignty compliance worries, and the use of the technology by threat groups.
It's unclear whether the new Safety and Security Committee will ease any of those concerns.
Ilia Kolochenko, co-founder and CEO of IT security firm ImmuniWeb, called OpenAI's move welcome but questioned its societal benefits.
OpenAI said the new committee's first step will be evaluating and improving OpenAI's processes and safeguards over 90 days and then bringing recommendations back to the full board, with OpenAI publicly sharing the recommendations that are approved.
The Worry About AGI

The company noted that the committee arrives just as OpenAI begins to train its next frontier model, which will succeed GPT-4 and bring the company even closer to achieving artificial general intelligence (AGI), the point where AI systems can learn, understand, and perform as well as humans, only much faster.
Reaching that point has long been a goal for Altman and OpenAI, though it raises myriad concerns about what it could mean for society and humanity itself.
Frontier models are the most cutting-edge AI models, designed to push the evolution of AI systems forward.