OpenAI has announced the formation of a Safety and Security Committee as it begins training its next AI model. The committee is led by directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman, supported by OpenAI’s technical and policy experts. Its first task is to evaluate and further develop OpenAI’s existing safety practices over the next 90 days. After the board reviews the committee’s recommendations, OpenAI will publicly share an update on those it adopts.
This move follows the departures of former Chief Scientist Ilya Sutskever and Jan Leike, leaders of OpenAI’s Superalignment team, which was responsible for ensuring that AI systems remain aligned with their intended objectives. The Superalignment team was disbanded earlier in May, with some team members reassigned to other groups.
The committee will be responsible for making recommendations to the board on safety and security decisions for OpenAI’s projects and operations. Its formation is seen as a critical step in addressing safety concerns as OpenAI begins training its next artificial intelligence model. The new model is expected to surpass the capabilities of the current GPT-4 system that underpins its ChatGPT chatbot.
Read more: www.nytimes.com