GZERO AI
Tell me lies, tell me sweet little AIs
[Image: A Pinocchio puppet. Photo by Jametlene Reskp on Unsplash]
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm trained with Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – a notoriously unreliable device – but it may be able to separate fact from fiction in large datasets, such as scanning the internet for disinformation.
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
With the global order under increasing strain, 2026 is shaping up to be a tipping point for geopolitics. From political upheaval in the United States to widening conflicts abroad, the risks facing governments, markets, and societies are converging faster, and more forcefully, than at any time in recent memory. To break it all down, journalist Julia Chatterley moderated a wide-ranging conversation with Ian Bremmer, president of Eurasia Group and GZERO Media, and a panel of Eurasia Group experts, examining the findings of their newly released annual Top Risks of 2026 report.
Some of the regime’s best moments — did we miss any? #PUPPETREGIME