Hollywood has long warned us about a future where humans and machines become indistinguishable, and we may be closer to it than we think. OpenAI's DALL-E 2 can create images from text prompts, like astronauts riding horses in space, and its ChatGPT language model generates human-like text, blurring the line between sci-fi and reality. By 2023, AI might even pass the Turing test, the decades-old benchmark for whether a machine can exhibit intelligence indistinguishable from a human's.
While generative AI has the power to solve major global challenges, it also presents dangers, Ian Bremmer explains on GZERO World.
Authoritarian governments can use it to expand surveillance and spread misinformation. In democracies, AI can generate false content at a volume and speed that make it difficult to distinguish fact from fiction.
We're at a critical juncture. How will generative AI change our lives? Will the ultimate movie be a rom-com or a horror film?
Watch the GZERO World episode: The AI arms race begins: Scott Galloway’s optimism & warnings