Regulating AI: The urgent need for global safeguards
There’s been a lot of excitement about the power and potential of new generative artificial intelligence tools like ChatGPT or Midjourney. But there’s also a lot to worry about: misinformation, data privacy, and algorithmic bias, just to name a few.
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus lays out the case for effective, comprehensive, global regulation when it comes to artificial intelligence.
Because of how fast the technology is developing and its potential impact on everything from elections to the economy, Marcus believes that every nation should have its own AI agency or cabinet-level position. He also believes that global AI governance is crucial, so that AI safety standards are the same from country to country.
“We need to move to something like the FDA model,” Marcus tells Bremmer on GZERO World. “If you’re going to do something that you deploy on a wide scale, you have to make a safety case.”
Watch the GZERO World episode: Is AI's "intelligence" an illusion?
And watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- The AI power paradox: Rules for AI's power ›
- Podcast: Getting to know generative AI with Gary Marcus ›
- AI comes to Capitol Hill ›
- Is AI's "intelligence" an illusion? ›
- Ian Bremmer: Algorithms are now shaping human beings' behavior - GZERO Media ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect - GZERO Media ›
- Gemini AI controversy highlights AI racial bias challenge - GZERO Media ›
Artificial intelligence: How soon will we see meaningful progress?
The field of artificial intelligence has exploded in the last year. Generative AI tools like ChatGPT are now used by hundreds of millions of people around the world for everything from writing college term papers to generating computer code.
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus discussed AI’s exponential growth and where the biggest advancements might be in the next few years. One word stands out: uncertainty.
Massive amounts of money have been pumped into AI research and development, but Marcus warns that just because investors are excited doesn’t mean we’ll see meaningful progress. He cites the example of driverless cars, a field that’s received over $100 billion in investment but hasn’t yet delivered on its initial promise.
“Large language models are actually special in their unreliability,” Marcus tells Bremmer. “They're arguably the most versatile AI technique that's ever been developed, but they're also the least reliable AI technique that's ever gone mainstream.”
Marcus says that even with the emergence of more advanced models like GPT-5, AI that can reliably give accurate information in critical areas, like medicine, is still a distant reality. For the near future, at least, “humans in the loop” will remain essential for almost all of the uses of AI we’re really benefiting from.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- Be more worried about artificial intelligence ›
- How should artificial intelligence be governed? ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- Is AI's "intelligence" an illusion? ›
- Personal data risks with TikTok; Tesla driverless cars investigation ›
- AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in - GZERO Media ›
Is AI's "intelligence" an illusion?
Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus breaks down the recent advances, and inherent risks, of generative AI.
AI-powered generative tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do magical things, like write a college term paper in Klingon or instantly create nine images of a slice of bread ascending to heaven.
But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth, often presenting inaccurate or plainly false information as facts. As generative AI becomes more widespread, it will undoubtedly change the way we live, in both good ways and bad.
“Large language models are actually special in their unreliability,” Marcus says on GZERO World. “They're arguably the most versatile AI technique that's ever been developed, but they're also the least reliable AI technique that's ever gone mainstream.”
Marcus sits down with Ian Bremmer to talk about the underlying technology behind generative AI, how it differs from the “good old-fashioned AI” of previous generations, and what effective, global AI regulation might look like.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- ChatGPT and the 2024 US election ›
- Can we trust AI to tell the truth? ›
- Ian interviews Scott Galloway: the ChatGPT revolution & tech peril ›
- Emotional AI: More harm than good? ›
- Podcast: Getting to know generative AI with Gary Marcus ›
- Artificial intelligence: How soon will we see meaningful progress? - GZERO Media ›
- Will consumers ever trust AI? Regulations and guardrails are key - GZERO Media ›
- UK AI Safety Summit brings government leaders and AI experts together - GZERO Media ›
- Top stories of 2023: GZERO World with Ian Bremmer - GZERO Media ›
Podcast: Getting to know generative AI with Gary Marcus
Listen: Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On the GZERO World with Ian Bremmer podcast, cognitive scientist, author, and AI researcher Gary Marcus breaks down the recent advances, and inherent risks, of generative AI.
AI-powered generative tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do magical things, like write college papers or create Picasso-style paintings out of thin air. But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth. According to Marcus, they’re like “autocomplete on steroids.”
As generative AI tools become more widespread, they will undoubtedly change the way we live, in both good ways and bad.
Marcus sits down with Ian Bremmer to talk about the latest advances in generative artificial intelligence, the underlying technology, AI’s hallucination problem, and what effective, global AI regulation might look like.
- Can we trust AI to tell the truth? ›
- Ian interviews Scott Galloway: the ChatGPT revolution & tech peril ›
- The AI arms race begins: Scott Galloway’s optimism & warnings ›
- Governing AI Before It’s Too Late ›
- Emotional AI: More harm than good? ›
- Is AI's "intelligence" an illusion? - GZERO Media ›
- AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in - GZERO Media ›
- Will AI further divide us or help build meaningful connections? - GZERO Media ›
- How is AI shaping culture in the art world? - GZERO Media ›
- AI is turbocharging the stock market, but is it all hype? - GZERO Media ›
Can we trust AI to tell the truth?
Is it possible to create artificial intelligence that doesn't lie?
On GZERO World with Ian Bremmer, cognitive scientist, psychologist, and author Gary Marcus sat down to unpack some of the major recent advances, and limitations, in the field of generative AI. Despite large language model tools like ChatGPT doing impressive things like writing movie scripts or college essays in a matter of seconds, there’s still a lot that artificial intelligence can’t do: namely, it has a pretty hard time telling the truth.
So how close are we to creating AI that doesn’t hallucinate? According to Marcus, that reality is still pretty far away. So much money and research has gone into the current AI bonanza that Marcus thinks it will be difficult for developers to stop and switch course unless there’s a strong financial incentive, like chat-based search, to do it. He also believes computer scientists shouldn’t be so quick to dismiss what’s known as “good old-fashioned AI”: systems that translate symbols into logic based on a limited set of facts and don’t make things up the way neural networks do.
Until there is a real breakthrough or new synthesis in the field, Marcus thinks we’re a long way from truthful AI, and incremental updates to the current large language models will continue to generate false information. “I will go on the record now in saying GPT-5 will [continue to hallucinate],” Marcus says. “If it’s just a bigger version trained on more data, it will continue to hallucinate. And the same with GPT-6.”
Watch the full interview on GZERO World, in a new episode premiering on September 8, 2023 on US public television.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- Politics, trust & the media in the age of misinformation ›
- The geopolitics of AI ›
- ChatGPT and the 2024 US election ›
- Be very scared of AI + social media in politics ›
- Is AI's "intelligence" an illusion? - GZERO Media ›
- Podcast: Getting to know generative AI with Gary Marcus - GZERO Media ›
- Will consumers ever trust AI? Regulations and guardrails are key - GZERO Media ›
- When AI makes mistakes, who can be held responsible? - GZERO Media ›