OpenAI Does Not Know How Its AI Thinks

OpenAI has admitted that it is still grappling with understanding how its own AI technologies function. The admission came from OpenAI CEO Sam Altman during the International Telecommunication Union's AI for Good Global Summit in Geneva, Switzerland.

Altman conceded that the company has not yet cracked interpretability: it is still figuring out how to trace its AI models’ outputs back to the decisions the models made to arrive at those answers. The statement came in response to a question about how the company’s large language models (LLMs) function.

Despite the billions of dollars invested in developing AI technologies that are revolutionizing the world, the challenge of understanding how the technology works remains. The issue is not unique to OpenAI; it is a widespread problem across the emerging AI field. Researchers have long struggled to explain the complex “thinking” that goes on behind the scenes, as AI chatbots respond almost effortlessly to any query thrown at them.

Part of the difficulty lies in tracing an output back to the original material the AI was trained on, and OpenAI has been tight-lipped about the data it uses to train its models. A recent scientific report commissioned by the UK government concluded that AI developers “understand little about how their systems operate” and that scientific knowledge is “very limited.”

Other AI companies are exploring ways to “open the black box” by mapping the artificial neurons inside their models. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs, Claude Sonnet, as a first step.

Read more: futurism.com