Generative AI models are known to hallucinate: they make things up and state them as fact (in other words, they lie). But new research suggests that, despite that shortcoming, AI could be a key tool for determining whether a human is telling the truth.

An economist at the University of Würzburg in Germany found that an algorithm trained on Google’s BERT language model was better at detecting lies than human evaluators. AI may never power a faultless polygraph (the traditional device is itself notoriously unreliable), but it could help separate fact from fiction at scale, for example by screening the internet for disinformation.
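The study’s exact pipeline isn’t detailed here, but systems like this are typically built by attaching a small classification head to BERT and fine-tuning it on statements labeled true or false. Below is a minimal, hypothetical sketch of what inference with such a classifier might look like, assuming Python with the Hugging Face Transformers library and PyTorch; the model name, label scheme, and example statement are illustrative, not taken from the study.

```python
# Illustrative sketch only -- the study's actual model and data are not
# described in this article. This shows the general shape of a BERT-based
# binary "truthful vs. deceptive" text classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # stand-in; the researchers fine-tuned a BERT variant
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
# Hypothetical labeling scheme: 0 = truthful, 1 = deceptive

statement = "I spent last weekend volunteering at the shelter."
inputs = tokenizer(statement, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze().tolist()
print(f"P(truthful)={probs[0]:.2f}  P(deceptive)={probs[1]:.2f}")
# Note: an untuned classification head gives near-random outputs; a working
# detector would first be fine-tuned on many human-labeled statements.
```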

Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.

At this year's World Economic Forum in Davos, Switzerland, our Global Stage panel discussion will examine the growing infrastructure around AI, how countries are tackling AI adoption, and the ways in which local and supranational industries might benefit from this rapidly accelerating technology. Watch the live premiere on Wednesday, January 21st at 12 PM ET / 6 PM CET at gzeromedia.com/globalstage.