Apple’s Local AI Model Outperforms GPT-4, Researchers Claim

Apple’s AI researchers have developed a local AI model, known as ReALM (Reference Resolution As Language Modeling), that they claim substantially outperforms GPT-4, the technology behind ChatGPT and Microsoft’s Copilot and a rival to Google’s Gemini. The claimed advantage comes largely from the model’s ability to take a variety of conversational variables into account as it processes user requests, allowing for a more natural and productive experience.

These variables include on-screen entities, conversational entities, and background entities. On-screen entities cover whatever is currently displayed on the user’s screen, a capability likely made possible by Apple’s model running locally rather than on the web. Conversational entities capture details the user has provided earlier in the conversation, even several turns ago. Background entities account for processes happening out of sight, such as an alarm ringing or music playing in the background.
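
To make the idea of treating reference resolution as a language-modeling task more concrete, here is a minimal, hypothetical sketch: candidate entities from the three sources above are serialized into a single text prompt, and the model is asked which one a user request refers to. The entity fields, prompt format, and example data are illustrative assumptions, not Apple’s actual ReALM implementation.

```python
# Hypothetical sketch of "reference resolution as language modeling":
# candidate entities are flattened into text and the model picks the referent.
from dataclasses import dataclass


@dataclass
class Entity:
    source: str  # "onscreen", "conversational", or "background"
    label: str   # short textual description of the entity


def build_prompt(entities: list[Entity], user_request: str) -> str:
    """Serialize candidate entities into a numbered list the model can refer to."""
    lines = ["Candidate entities:"]
    for i, e in enumerate(entities, start=1):
        lines.append(f"{i}. [{e.source}] {e.label}")
    lines.append(f'User request: "{user_request}"')
    lines.append("Answer with the number of the entity the request refers to.")
    return "\n".join(lines)


if __name__ == "__main__":
    candidates = [
        Entity("onscreen", "Phone number 555-0123 shown on an open web page"),
        Entity("conversational", "Pharmacy the user asked about two turns ago"),
        Entity("background", "Alarm currently ringing"),
    ]
    print(build_prompt(candidates, "Call that one"))
    # The resulting text would be fed to the language model,
    # which would reply with the index of the intended entity.
```

The design choice this illustrates is that once everything (screen contents included) is rendered as plain text, a single language model can resolve references across all three entity types without separate, specialized components.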

Incorporating these entities into a small AI model reportedly yielded performance comparable to that of GPT-4, while incorporating them into a larger model reportedly allowed Apple to outperform GPT-4 by a substantial margin. Apple has set aside more than a billion dollars to catch up with its rivals in the AI race, and analysts have speculated that the company will introduce an AI-equipped Siri upgrade along with iOS 18 later this year.

Read more: www.extremetech.com
