Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus breaks down the recent advances, and inherent risks, of generative AI.
AI-powered generative tools like the large language model ChatGPT or the text-to-image generator Midjourney can do magical things, like write a college term paper in Klingon or instantly create nine images of a slice of bread ascending to heaven.
But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth, often presenting inaccurate or plainly false information as facts. As generative AI becomes more widespread, it will undoubtedly change the way we live, in both good ways and bad.
“Large language models are actually special in their unreliability,” Marcus says on GZERO World. “They're arguably the most versatile AI technique that's ever been developed, but they're also the least reliable AI technique that's ever gone mainstream.”
Marcus sits down with Ian Bremmer to talk about the underlying technology behind generative AI, how it differs from the “good old-fashioned AI” of previous generations, and what effective, global AI regulation might look like.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.