Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, deepen loneliness, only at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter possibility, especially as tech companies like Replika demonstrate AI's potential to ease loneliness and even reconnect people with lost loved ones.
So like a lot of people, I've been immersing myself in the debate about this current AI moment we're in, and I've been struck by a recurring theme: whether AI will further divide us or could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, but on steroids. If social media took our collective attention and used it to keep us hooked in public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases even our loneliness. And broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the harms of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side of this: that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it differs from previous technologies not in degree but in kind, that it's something fundamentally different. I initially recoiled from this suggestion, because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a friend of hers named Roman, who had died, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it and got real value from the experience, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. You don't just lose the person or their presence in your life; you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person and of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that more tech could solve the problems caused by tech is a difficult proposition for many to stomach. But here's what I think we should be watching for as we bring these new tools into our lives. As we take up AI tools online, in our workplaces, in our social lives, and within our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitches of productivity, and undervaluing human connection, whether the connection we're losing by using these tools or perhaps the connections we're gaining? And do these tools ultimately further divide us, or do they provide the means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
Last thing I want to mention here, I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, such as the ones we've been discussing on this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.
Can we trust AI to tell the truth?
Is it possible to create artificial intelligence that doesn't lie?
On GZERO World with Ian Bremmer, cognitive scientist, psychologist, and author Gary Marcus sat down to unpack some of the major recent advances, and limitations, of generative AI. Although large language model tools like ChatGPT can do impressive things, like writing movie scripts or college essays in a matter of seconds, there's still a lot that artificial intelligence can't do: namely, it has a pretty hard time telling the truth.
So how close are we to creating AI that doesn't hallucinate? According to Marcus, that reality is still pretty far away. So much money and research has gone into the current AI bonanza that Marcus thinks it will be difficult for developers to stop and switch course unless there's a strong financial incentive, like Chat Search, to do so. He also believes computer scientists shouldn't be so quick to dismiss what's known as "good old-fashioned AI": systems that translate symbols into logic based on a limited set of facts and don't make things up the way neural networks do.
Until there is a real breakthrough or new synthesis in the field, Marcus thinks we're a long way from truthful AI, and incremental updates to the current large language models will continue to generate false information. "I will go on the record now in saying GPT-5 will [continue to hallucinate]," Marcus says. "If it's just a bigger version trained on more data, it will continue to hallucinate. And the same with GPT-6."
Watch the full interview on GZERO World, in a new episode premiering on September 8, 2023 on US public television.
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.