Tell me lies, tell me sweet little AIs
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm trained on Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – the traditional device is notoriously unreliable anyway – but it may be able to sift fact from fiction in large datasets, such as scanning the internet for disinformation.
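For readers curious what that kind of system looks like under the hood, here is a minimal, purely illustrative sketch of fine-tuning BERT as a binary truthful-vs-deceptive text classifier with the Hugging Face transformers library. The toy statements, labels, and training settings below are assumptions for demonstration only, not the Würzburg study's actual data or code.

```python
# Illustrative sketch: fine-tune BERT to label statements as
# truthful (0) or deceptive (1). Data and settings are placeholders.
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy examples standing in for a labeled deception dataset.
texts = ["I spent the weekend hiking with my family.",
         "I have never met that person in my life."]
labels = [0, 1]

encodings = tokenizer(texts, truncation=True, padding=True,
                      return_tensors="pt")

class StatementDataset(torch.utils.data.Dataset):
    """Wraps tokenized statements and labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=StatementDataset(encodings, labels),
)
trainer.train()
```

In practice, a model like this is only as good as its labeled examples, which is why researchers test it against human judges on held-out statements rather than trusting its raw accuracy numbers.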
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.