Hard Numbers: Voters express AI skepticism, Mastercard’s latest purchase, China’s AI deficit, the Taylor Swift effect, Intel’s European delays
400,000: More than 400,000 people visited vote.gov after Taylor Swift endorsed Kamala Harris last week and encouraged her fans to register to vote. Swift said she was spurred to make an official endorsement after Trump posted AI fakes on Truth Social showing her and her supporters endorsing him. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter,” she said.
64: Most Americans are skeptical of AI-powered election information, according to a new survey from the Associated Press. Only 5% of respondents said they’re very or extremely confident in the answers AI systems give to political questions, 30% said they’re somewhat confident, and 64% said they’re not very confident or not at all confident in the accuracy of these services.
2.65 billion: Mastercard agreed last week to buy the threat intelligence company Recorded Future from the private equity firm Insight Partners for $2.65 billion. Recorded Future already partners with Mastercard, using machine learning to identify credit cards that may have been compromised, and Mastercard said the partnership has doubled the rate at which it identifies compromised cards compared with the year prior.
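For a sense of what that kind of system might look like under the hood, here is a minimal sketch of anomaly-based card monitoring. Everything in it (the transaction features, the toy data, and the use of an isolation forest) is an illustrative assumption; the actual Mastercard/Recorded Future system is proprietary and its design isn’t described in this story.

```python
# Hypothetical sketch: flag potentially compromised cards by scoring
# transactions for anomalies. Features and values are invented for
# illustration; the real system's design is not public.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per transaction: [amount_usd, hours_since_last_use, km_from_home]
normal = rng.normal(loc=[40.0, 12.0, 5.0], scale=[15.0, 6.0, 3.0], size=(1000, 3))
suspicious = np.array([
    [950.0, 0.1, 4200.0],  # large, rapid, far-away purchase
    [3.0, 0.05, 8000.0],   # tiny "test" charge from abroad
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers, 1 for inliers
for tx, label in zip(suspicious, model.predict(suspicious)):
    status = "flag for review" if label == -1 else "ok"
    print(f"amount=${tx[0]:.2f} -> {status}")
```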
6: It’s no secret that the US is leading China in the AI race, but by how much? Kai-Fu Lee, founder of the startup 01.AI and former head of Google China, said that China’s large language models are likely six to nine months behind those in the US. “It’s inevitable that China will [build] the best AI apps in the world,” he said at a conference last week. “But it’s not clear whether it will be built by big companies or small companies.” Whether China gets there depends on its firms’ capacity for cutting-edge research and their access to powerful chips, which, despite US restrictions, appear to be slipping across the country’s borders.
33 billion: Intel announced Monday that it has delayed its $33 billion chip factory in the German city of Magdeburg amid broader cost-saving measures across the company. The US chipmaker said the project would be pushed back by two years as it tries to deliver $10 billion in cost savings in 2025. It’s also postponing plans for a new factory in Poland. Intel had expected to employ about 5,000 people across the two plants.
Tell me lies, tell me sweet little AIs
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm trained on Google’s BERT language model was better at detecting lies than human evaluators were. AI may never power a faultless polygraph (itself a notoriously unreliable device), but it could help separate fact from fiction in large datasets, such as flagging disinformation on the internet.
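The study’s exact setup isn’t detailed here, but the general recipe it points to (fine-tuning a BERT-style model as a binary classifier over statements) looks something like the sketch below. The model name, labels, and example statements are placeholders, not the researchers’ actual data.

```python
# Minimal sketch of the general approach: a BERT classifier that labels
# statements as truthful or deceptive. Labels and examples are placeholders;
# the Würzburg study's exact configuration is not described in this article.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed labels: 0 = truthful, 1 = deceptive
)

statements = [
    "I spent the weekend hiking with my family.",
    "I have absolutely never met that person in my life.",
]
inputs = tokenizer(statements, padding=True, truncation=True, return_tensors="pt")

# Before fine-tuning on labeled statements, these scores are meaningless;
# shown only to illustrate the classification interface.
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-statement [p(truthful), p(deceptive)]
```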
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
Can we trust AI to tell the truth?
Is it possible to create artificial intelligence that doesn't lie?
On GZERO World with Ian Bremmer, cognitive scientist, psychologist, and author Gary Marcus sat down to unpack some of the major recent advances and limitations in the field of generative AI. Large language model tools like ChatGPT can do impressive things, like writing movie scripts or college essays in a matter of seconds, but there’s still a lot that artificial intelligence can’t do: namely, it has a pretty hard time telling the truth.
So how close are we to creating AI that doesn’t hallucinate? According to Marcus, that reality is still pretty far away. So much money and research has gone into the current AI bonanza that Marcus thinks it will be difficult for developers to stop and switch course unless there’s a strong financial incentive, like chat-based search, to do it. He also believes computer scientists shouldn’t be so quick to dismiss what’s known as “good old-fashioned AI”: systems that translate symbols into logic based on a limited set of facts, and that don’t make things up the way neural networks do.
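To see why such symbolic systems don’t hallucinate, consider a toy forward-chaining engine: it can only assert conclusions that follow from its explicit facts and rules. The facts and rules below are invented for illustration, not drawn from any particular system Marcus describes.

```python
# Toy "good old-fashioned AI": forward-chaining over explicit facts and rules.
# The system can only derive statements entailed by its knowledge base,
# so it never asserts anything it cannot trace back to a fact.
facts = {"socrates_is_human"}
rules = [
    # (premises, conclusion)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:  # keep applying rules until no new facts appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'socrates_is_human', 'socrates_is_mortal', 'socrates_will_die'}
# Anything not derivable (e.g., "socrates_is_immortal") is simply absent.
```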
Until there is a real breakthrough or new synthesis in the field, Marcus thinks we’re a long way from truthful AI, and incremental updates to the current large language models will continue to generate false information. “I will go on the record now in saying GPT-5 will [continue to hallucinate],” Marcus says. “If it’s just a bigger version trained on more data, it will continue to hallucinate. And the same with GPT-6.”
Watch the full interview on GZERO World, in a new episode premiering on September 8, 2023 on US public television.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- Politics, trust & the media in the age of misinformation ›
- The geopolitics of AI ›
- ChatGPT and the 2024 US election ›
- Be very scared of AI + social media in politics ›
- Is AI's "intelligence" an illusion? - GZERO Media ›
- Podcast: Getting to know generative AI with Gary Marcus - GZERO Media ›
- Will consumers ever trust AI? Regulations and guardrails are key - GZERO Media ›
- When AI makes mistakes, who can be held responsible? - GZERO Media ›