Scott Nover
Contributing Writer
Scott Nover is the lead writer for GZERO AI. He's a contributing writer for Slate and was previously a staff writer at Quartz and Adweek. His writing has appeared in The Atlantic, Fast Company, Vox.com, and The Washington Post, among other outlets. He currently lives near Washington, DC, with his wife and pup.
Jul 09, 2024
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm built on Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – itself a notoriously unreliable device – but it may be able to separate fact from fiction in large datasets, such as flagging disinformation online.
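For readers curious what “an algorithm built on BERT” looks like in practice, here is a minimal, hypothetical sketch of fine-tuning BERT as a binary truthful-vs-deceptive text classifier using the Hugging Face transformers library. The model name, toy data, and training settings are illustrative assumptions, not the setup used in the Würzburg study.

```python
# Minimal sketch: fine-tune BERT to classify statements as truthful (1) or deceptive (0).
# The toy dataset and hyperparameters below are placeholders for illustration only.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)
from datasets import Dataset

# Hypothetical labeled statements; a real study would use thousands of examples.
data = Dataset.from_dict({
    "text": [
        "I spent last weekend hiking with my family.",
        "I have never once been late to a meeting in my life.",
    ],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Convert raw text into BERT input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lie-detector",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
)
trainer.train()  # fine-tunes BERT on the labeled statements
```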
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.