Google DeepMind’s Superhuman AI Excels in Fact-Checking

Google DeepMind has unveiled a new AI system for automated fact-checking, the Search-Augmented Factuality Evaluator (SAFE). SAFE uses a large language model to break generated text down into individual facts and then uses Google Search results to determine the accuracy of each claim.
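The pipeline can be sketched in a few lines of Python. The helpers `call_llm` and `google_search` below are hypothetical stand-ins for an LLM API and a search API, not DeepMind's released code; the sketch only illustrates the split-then-verify loop described above.

```python
# Minimal sketch of a SAFE-style fact-checking loop (assumed interfaces).
from dataclasses import dataclass


@dataclass
class FactVerdict:
    claim: str
    supported: bool


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API."""
    raise NotImplementedError


def google_search(query: str, num_results: int = 3) -> list[str]:
    """Hypothetical search call; replace with a real search API."""
    raise NotImplementedError


def check_response(response_text: str) -> list[FactVerdict]:
    # Step 1: use the LLM to split the response into individual factual claims.
    facts = call_llm(
        "Split the following text into individual factual claims, "
        f"one per line:\n{response_text}"
    ).splitlines()

    verdicts = []
    for fact in facts:
        # Step 2: gather search results relevant to this claim.
        evidence = "\n".join(google_search(fact))
        # Step 3: ask the LLM whether the evidence supports the claim.
        answer = call_llm(
            f"Claim: {fact}\nEvidence:\n{evidence}\n"
            "Is the claim supported? Answer SUPPORTED or NOT SUPPORTED."
        )
        verdicts.append(
            FactVerdict(claim=fact, supported="NOT" not in answer.upper())
        )
    return verdicts
```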

In a dataset of roughly 16,000 facts, SAFE’s assessments matched human ratings 72% of the time. In a sample of 100 disagreements between SAFE and the human raters, SAFE’s judgment was found to be correct in 76% of cases. However, some experts question what “superhuman” really means in this context, suggesting that it may simply mean “better than an underpaid crowd worker, rather than a true human fact checker.”

read more > VentureBeat