A group of employees from leading AI companies OpenAI and Google DeepMind has published a letter warning about the potential dangers of advanced AI. They allege that these companies are prioritizing financial gains while avoiding necessary oversight. The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” was signed by thirteen current and former employees of the two companies.
The letter cautions that, without proper regulation, AI systems are powerful enough to pose serious risks. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, which could potentially result in human extinction.
The group alleges that AI companies have information about the risks of the AI technology they are working on, but because they aren’t required to disclose much to governments, the real capabilities of their systems remain a secret. They argue that current and former employees are the only ones who can hold the companies accountable to the public, yet many are restricted by confidentiality agreements that prevent them from voicing their concerns publicly.
The employees are calling on AI companies to offer solid whistleblower protections for speaking out about the risks of AI. They suggest that companies should not create or enforce agreements that prohibit criticism over risk-related concerns; should offer a verifiably anonymous process for employees to raise such concerns with the board, regulators, and independent organizations with relevant expertise; and should support a culture of open criticism that allows employees to raise risk-related concerns about these technologies to the public, the board, regulators, and others, as long as trade secrets are protected. They also urge companies not to retaliate against employees who publicly share risk-related confidential information after other processes have failed.
Read more: time.com