Generative AI models are known to hallucinate, or make things up and state them as fact (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.

An economist at the University of Würzburg in Germany found that an algorithm trained with Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – a notoriously unreliable device – but it may be able to separate fact from fiction in large datasets, such as when searching for disinformation on the internet.

Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
