GZERO AI
Tell me lies, tell me sweet little AIs
A Pinocchio puppet.
Photo by Jametlene Reskp on Unsplash
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm trained with Google’s BERT language model was better at detecting lies than human evaluators. AI might not power a faultless polygraph – the devices are notoriously unreliable anyway – but it could help sift fact from fiction in large datasets, for instance by screening the internet for disinformation.
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
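For the technically curious, the study’s approach is essentially a text-classification task. Here is a minimal sketch of how such a detector is typically built, assuming a standard Hugging Face fine-tuning setup – the model name, example statements, labels, and hyperparameters below are illustrative, not the researcher’s actual configuration:

```python
# Minimal sketch: fine-tuning a BERT classifier to label statements as
# truthful (0) or deceptive (1). Dataset and settings are illustrative
# assumptions, not the study's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = truthful, 1 = deceptive
)

# Hypothetical labeled statements; a real run would use thousands of them.
texts = [
    "I spent last weekend hiking with my family.",
    "I have never met that person in my life.",
]
labels = torch.tensor([0, 1])

# Tokenize and take a single training step on the toy batch.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()

# After training, the class probabilities serve as a truthfulness score.
probs = torch.softmax(outputs.logits, dim=-1)
print(probs)
```

In practice, the heavy lifting is in the labeled data: the classifier is only as good as the corpus of verified true and false statements it is trained on.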
Global conflict was at a record high in 2025 – will 2026 be more peaceful? Ian Bremmer talks with CNN’s Clarissa Ward and Comfort Ero of the International Crisis Group on the GZERO World Podcast.
Think you know what's going on around the world? Here's your chance to prove it.
Indian Prime Minister Narendra Modi isn’t necessarily known as the greatest friend of Muslim people, yet his own government is now seeking to build bridges with Afghanistan’s Islamist leaders, the Taliban.
The European Union just pulled off something that, a year ago, seemed politically impossible: it froze $247 billion in Russian central bank assets indefinitely, stripping the Kremlin of one of its most reliable pressure points.