Tech Giants Unite for AI Safety, Pledge Kill Switch in Agreement

In a landmark international agreement on artificial intelligence safety, major tech companies including Microsoft, Amazon, and OpenAI have pledged to ensure the safe development of their most advanced AI models. The agreement was made at the Seoul AI Safety Summit and includes tech firms from various countries such as the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates.

The companies have agreed to publish safety frameworks outlining how they’ll measure the risks posed by their frontier models, such as the potential for misuse of the technology by bad actors. These frameworks will include “red lines” defining the kinds of risks associated with frontier AI systems that would be considered “intolerable.” These risks include, but aren’t limited to, automated cyberattacks and the threat of bioweapons.

In response to such extreme circumstances, the companies plan to implement a “kill switch” that would halt development of their AI models if these risks can’t be sufficiently mitigated. The agreement expands on a previous set of commitments made last November by companies developing generative AI software.

The commitments apply only to so-called frontier models — the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT chatbot. The companies have agreed to take input on these risk thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned gathering — the AI Action Summit in France — in early 2025.

The agreement marks a world first in AI safety, with so many leading AI companies from different parts of the globe signing on to the same commitments. It promotes transparency and accountability in plans to develop safe AI, contributing to global efforts to mitigate the risks surrounding advanced AI systems.

Read more: www.cnbc.com