OpenAI Leaders Respond to Safety Leader’s Resignation Amid Controversy

OpenAI CEO Sam Altman and president and co-founder Greg Brockman have responded to the resignation of Jan Leike, co-head of the company’s Superalignment team. Leike resigned citing disagreements with leadership over its core priorities. In a thread on X (formerly Twitter), Leike expressed concern that safety had taken a back seat at the company and said his team had struggled to obtain the compute and resources needed to carry out its safety work.

Altman and Brockman addressed Leike’s claims in a joint response on X. They expressed gratitude for Leike’s work and offered three points in answer to the questions they had received following the resignation.

Firstly, they stated that OpenAI has raised awareness about artificial general intelligence (AGI) so that the world can better prepare for it. They claimed to have repeatedly demonstrated the remarkable possibilities of scaling up deep learning and to have analyzed its implications.

Secondly, they mentioned that they are building the foundations for safe deployment of these technologies, citing the work employees did to bring GPT-4 to the world in a safe way as an example. They claimed that since GPT-4’s release in March 2023, the company has continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Lastly, they acknowledged that the future will be harder than the past. They explained that OpenAI needs to keep elevating its safety work as it releases increasingly capable models, and pointed to the company’s Preparedness Framework as one way to do that. The framework aims to anticipate catastrophic risks that could arise and to mitigate them before they materialize.

Read more: mashable.com