Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.

An economist at the University of Würzburg in Germany found that an algorithm trained with Google's BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – a notoriously unreliable device – but it may be able to separate fact from fiction in large datasets, such as flagging disinformation on the internet.

Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.

More For You

Anatomy of a Scam

Behind every scam lies a story — and within every story, a critical lesson. Anatomy of a Scam takes you inside the world of modern fraud, from investment schemes to impersonation and romance scams. You'll meet the investigators tracking down bad actors and learn about the innovative work being done across the payments ecosystem to protect consumers and businesses alike. Watch the first episode of Mastercard's five-part documentary, 'Anatomy of a Scam,' here.

Geoffrey Hinton, the ‘Godfather of AI,’ joins Ian Bremmer on the GZERO World podcast to talk about how the technology he helped build could transform our lives… and also threaten our very survival.


Is the AI jobs apocalypse upon us? On Ian Explains, Ian Bremmer breaks down the confusing indicators in today's labor market and how both efficiency gains and displacement from AI will affect the global workforce.