Listen: Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On the GZERO World with Ian Bremmer podcast, cognitive scientist, author, and AI researcher Gary Marcus breaks down the recent advances, and the inherent risks, of generative AI.
AI-powered, large language model tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do seemingly magical things, like writing college papers or creating Picasso-style paintings out of thin air. But there's still a lot they can't do: namely, they have a pretty hard time with the concept of truth. According to Marcus, they're like "autocomplete on steroids."
As generative AI tools become more widespread, they will undoubtedly change the way we live, in both good ways and bad.
Marcus sits down with Ian Bremmer to talk about the latest advances in generative artificial intelligence, the underlying technology, AI’s hallucination problem, and what effective, global AI regulation might look like.