Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity
In a wide-ranging conversation with Ian Bremmer, filmed live at the historic 92nd Street Y in NYC, bestselling author Yuval Noah Harari delves deep into the profound shifts AI is creating in geopolitical power dynamics, narrative control, and the future of humanity.
Highlighting AI's unparalleled capacity to make autonomous decisions and generate original content, Harari underscores the rapid pace at which humans are ceding control over both power and stories to machines. "AI is the first technology in history that can take power away from us,” Harari tells Bremmer.
The discussion also touches on AI's impact on democracy and personal relationships, with Harari emphasizing AI's infiltration into our conversations and its burgeoning ability to simulate intimacy. This, he warns, could "destroy trust between people and destroy the ability to have a conversation," thereby unraveling the fabric of democracy itself. Harari chillingly refers to this potential outcome as "a social weapon of mass destruction." And it’s scaring dictators as much as democratic leaders. “Dictators,” Harari reminds us, “they have problems too.”
Harari's insights into AI's impact on democracy, intimacy, and social cohesion offer a stark vision of the challenges and transformations lying ahead. "The most sophisticated information technology in history, and people can no longer talk with each other?"
Watch full episode: Yuval Noah Harari explains why the world isn't fair (but could be)
Catch GZERO World with Ian Bremmer every week online and on US public television. Check local listings.
AI & human rights: Bridging a huge divide
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, where a conference is underway that deals with human rights and artificial intelligence. And I'm really happy that we focus on this uniquely today and also tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.
And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to making sure that there is representation in governance from marginalized communities, for example. But what I still think is missing is a deeper connection between those people who speak AI, if you will, and those people who speak human rights. The worlds of policy and politics and the worlds of artificial intelligence, and within those, the people who care about human rights, still tend to operate in parallel universes. So what I'll try to do in my closing keynote today is to bring people's minds to a concrete, positive political agenda for change: thinking about how we can frame human rights for a broader audience, making sure that we use the tools that are there, applying the laws that exist at both the international and national level, and doubling down on enforcement. Because so often the seeds for meaningful change are already in the laws, but those laws are not being enforced forcefully enough.
And so we have a lot of work ahead of us. But I think the conference was a good start. And I'll be curious to see the different tone and the focus on geopolitics as I go to the Munich Security Conference with lots of the GZERO team as well.