AI's existential risks: Why Yoshua Bengio is warning the world
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference he attended, Owen observes that while startups and officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was instrumental in the development of the technology, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, often called the Nobel Prize of computing, for their work on deep learning, which forms the foundation of most of our current AI models. In 2022, he was the most cited computer scientist in the world. It's safe to say that AI as we currently know it might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely reported in the media. He now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's only a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he shared the Turing Award, differ on the timeframe of this risk and on industry's ability to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should create clarity in our public policy. Without certainty about the risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
AI is turbocharging the stock market, but is it all hype?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, explores how artificial intelligence is turbocharging the stock market and transforming our economy. With AI driving the S&P 500 to new heights and drastically boosting NVIDIA's stock, researchers predict a future where we could be 1,000 times wealthier. However, Owen raises critical questions about whether this rapid growth is sustainable or simply a bubble ready to burst.
So whatever your lingering skepticism of this current moment of AI hype might be, one thing is undeniable: AI is turbocharging the stock market and the economy more broadly.
The S&P 500 hit an all-time high this year, largely driven by AI. NVIDIA's stock has jumped 700% since the launch of ChatGPT, at one point making it the most valuable company in the world. And some researchers think this is going to get even crazier. They argue that we could see 30% annual per capita economic growth by 2100 because of AI. What this means is that after 25 years of 30% annual per capita growth, we would be roughly 1,000 times richer than we are now.
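As a quick back-of-envelope check on that compounding claim (my own arithmetic, not a figure from these researchers): growing at 30% a year for $n$ years multiplies per capita income by $1.3^{n}$, and

\[
(1.30)^{25} \approx 706, \qquad (1.30)^{27} \approx 1{,}190,
\]

so roughly a quarter century of sustained 30% annual growth does indeed multiply incomes by a factor on the order of 1,000.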
But what are these wild predictions based on? It really comes down to human labor being replaced by AI. These economists argue that AI could replace humans, and that machines could do all sorts of things humans can't, or do the things we already do much better. Perhaps more importantly, though, AI isn't constrained by the number of humans in the workforce. We could scale labor in a way unconstrained by human capacity. This, they argue, fundamentally changes the core dynamics of the economy.
But these are still predictions, and wild speculations at that, often promoted by the very people who stand to benefit most from the hype around AI. There's just no good evidence at this point that these things will necessarily come to fruition. And even if this wealth is generated, this 1,000-times-richer-than-we-are-now wealth, there's no guarantee of how it will be distributed: who will get it, who will benefit, and who won't. It's pretty clear that the wealth is likely to trickle up to those who own and control these technologies, as it has in the past. It's also clear that those who are most precarious in the workforce will be the most vulnerable and likely the most harmed. If we're talking about machines replacing humans, that almost certainly means women and minorities, who are overrepresented in the service workforce.
Some argue that universal basic income could be a solution to this: that we should simply take this excess wealth and distribute it to all of us so that we don't have to work. But there's a real problem here. People find meaning in their work. I recently spoke to Rana Foroohar, a global economic reporter for the Financial Times, and she made the case really powerfully to me that we derive meaning from work, and that if you take that away, there will be serious political repercussions. We've already started to see them. Because of all of this, Rana thinks we're in a bubble. She thinks the economy simply can't run this hot for this long; it would be historically unprecedented, she argues, for it to go on much longer. She also argues that the narratives about why this economic growth is going to happen are simply too tenuous to support the economic activity being built on them. And for her, that is a clear sign we're in a bubble. When a single narrative that allows for no contradictions, a story of one certain path forward, is supporting a huge amount of economic activity, that is the sign of a bubble.
Finally, she argues that the economic growth is simply too concentrated. Too few people are seeing the benefits of it at the moment: six or seven tech companies are responsible for the bulk of the value being generated around AI. That concentration is not broadly good for society. If a tech bubble collapses, though, we are all on the hook for it. As with any bubble, we as a society, our pension funds, our investments, our retirements, and the rest of the economy are being floated by it, so we need to think really carefully about how and when it deflates.
How is AI shaping culture in the art world?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, recounts his conversation with media theorist Douglas Rushkoff about the cultural implications of the ongoing AI revolution, which raised a couple of questions: Will AI enhance cultural production, the way Auto-Tune and Photoshop did, or produce art that truly moves society? And will people even care about its role in cultural production? For now, Owen notes, AI-generated content often lacks the cultural depth that we demand of our art and culture.
So, I recently had a wonderful conversation with the media theorist Douglas Rushkoff about what this current moment in AI means for our culture.
For the past 30 years, Rushkoff has been chronicling the relationship between emerging technologies and how our cultural production responds to them. And in our conversation, he referenced a really wonderful observation from Neil Postman, the great media theorist who came up with the idea of "amusing ourselves to death." When Postman was asked to describe what media are, he said that a medium is the thing in which a culture grows: the Petri dish in which we, as a society, develop our culture. It's a wonderful metaphor, and it left me wondering: if a medium is the thing in which culture grows, what kind of culture is growing from AI?
Will this culture be more like Auto-Tune or Photoshop, that is, cultural production that's augmented by AI? And what kind of art will be built with AI, made with AI? Will it be used to create the equivalent of art in a bathroom, as Rushkoff put it, or to make real art that impacts us and moves us as a society? And how will we as citizens know what role AI played in cultural production? Will we care? Will we want something like GMO or organic labels for cultural production that leverages AI? Or will we demand AI-free spaces, as are starting to emerge, places online and in the physical world that are guaranteed not to have been touched by AI? And if we do know that a work was created entirely by AI, will we even care?
And I'm very skeptical here; I worry that we won't. When I look at the culture being created by AI right now, I see a dulling. My Twitter feed is flooded with AI-generated crap, and I'm just not seeing the whimsical, delightful, powerful, and important cultural content created with AI that we need as a society, that we demand of our art and culture.
I hope this changes. I really do. And I hope that part of how we view the evolution of AI in our society is through the lens of what kind of culture it is building. I'm Taylor Owen, and thanks for watching.
How AI models are grabbing the world's data
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, examines the scale and implications of the historic data land grab happening in the AI sector. According to researcher Kate Crawford, AI is the largest superstructure ever built by humans, requiring immense human labor, natural resources, and staggering amounts of data. But how are tech giants like Meta and Google amassing this data?
So AI researcher Kate Crawford recently told me that she thinks AI is the largest superstructure our species has ever built. This is because of the enormous amount of human labor that goes into building AI, the physical infrastructure needed to run the compute for these AI systems, the natural resources, the energy, and the water that go into that infrastructure, and, of course, because of the insane amount of data needed to build our frontier models. It's increasingly clear that we're in the middle of a historic land grab for this data, essentially for all of the data humanity has ever created. So where is all this data coming from, and how are these companies getting access to it? Well, first, they're clearly scraping the public internet. It's safe to say that if anything you've done has been posted to the internet publicly, it's inside the training data of at least one of these models.
But it's also probably the case that this scraping includes a large amount of copyrighted data, or data that isn't necessarily publicly available. They're probably also getting behind paywalls, as we'll likely find out when the New York Times lawsuit against OpenAI works its way through the system. And they're scraping each other's data. According to the New York Times, Google found out that OpenAI was scraping YouTube, but didn't reveal it to the public because they too were scraping all of YouTube themselves and didn't want that getting out. Second, all of these companies are purchasing or licensing data. This includes news licensing agreements with publishers, data purchased from data brokers, and buying, or getting access to, companies that hold rich data sets. Meta, for example, was considering buying the publisher Simon & Schuster just for access to its copyrighted books to train its LLM.
The companies that already hold rich data sets themselves are obviously at an advantage here, and in particular that means Meta and Google. Meta uses all the public data that has ever been put into its systems. And it has said that even if you don't use its products, your data could still be in its systems, either from data purchased outside its products or because you simply appeared, for example, in an Instagram photo; your face is now being used to train its AI. Google has said that it uses anything public on its platforms, so an unrestricted Google Doc, for example, will end up in its training data. And these companies are also acquiring data in creative ways, to say the least. Meta has trained its large language model on a dataset called Books3, which contains over 170,000 pirated, copyrighted books. So where does all this leave us, citizens and users of the internet?
Well, one thing is clear: we can't opt out of this data collection and use. The opt-out tool Meta provides is hidden and complicated to use, and it requires you to provide proof that your data has been used to train Meta's AI systems before they'll even consider removing it from their data sets. These are not the kinds of user tools we should expect in democratic societies. So it's pretty clear that we're going to need to do three things. One, we're going to need to scale up our journalism. This is exactly why we have investigative journalism: to hold the powerful governments, actors, and corporations in our society to account. Journalism needs to dig deep into who's collecting what data, how these models are being trained, and how they're being built on data collected about our lives and our online experiences. Second, the lawsuits are going to need to work their way through the system, and the discovery that comes with them should be revealing. The New York Times' lawsuit, to take just one of the many against OpenAI, will surely reveal whether paywalled journalism sits within the training data of these AI systems. And finally, there is absolutely no doubt that we need regulation to provide transparency and accountability around the data collection that is driving AI.
Meta recently announced, for example, that it was going to use data it had collected on EU citizens to train its LLM. Immediately after the Irish Data Protection Commission pushed back, it announced it would pause this activity. This is why we need regulation. People who live in countries or jurisdictions with strong data protection rules and AI transparency regimes will ultimately be better protected. I'm Taylor Owen and thanks for watching.
Can AI help doctors act more human?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, explores the rather surprising role artificial intelligence could play in the healthcare industry's efforts to reconnect with humanity. Doctors have become busier and are spending less time with their patients, but AI has been touted as a way to help them foster more human connections.
So, if there's one sector of our economy and our society that could use some real transformation, it's our healthcare system. No matter where you live around the world, no matter what your healthcare financial model is, it is almost certainly letting you down in some way. And at the core of this is the relationship between a doctor and a patient.
As doctors have become busier and been tasked with more and more responsibilities, they are spending less time with us, their patients. In the US, the average doctor's appointment is only 7 minutes long. In South Korea, it's 2 minutes. And in the US, one consequence of this is that there are 12 million significant misdiagnoses each year, 800,000 of which result in death or disability.
Cardiologist, medical researcher, and author Eric Topol says, "Medicine has become inhuman." Paradoxically, though, Topol thinks AI could make it more human. In its most basic form, this means bringing AI into the patient-doctor conversation. It could mean AI transcribing our conversations, allowing a doctor to pay attention to us rather than typing at a computer screen. It also opens up a range of tasks the doctor could be assisted with by AI: making our future appointments, following up on our treatment plans, or, perhaps more powerfully, helping with diagnosis itself. A doctor has a very limited view into our current condition; an AI might have far greater visibility. Just take radiology scans. Topol says an AI could add superhuman eyes to the doctor's capacity. When a radiology scan is ordered, the radiologist is typically told to look for one specific thing, but an AI could look for everything, and it would have access to potentially rich and detailed views of our health history. Retina scans are another example: an AI can detect everything from diabetes to kidney disease to, potentially, Alzheimer's, just by looking at our eyes.
Another powerful potential here is in forecasting the future. The healthcare profession is not just about diagnosing our current conditions; it should also be about helping protect us from potential future ones. An AI can help process reams of data about our bodies, our health history, our family history, and our genetics, and potentially predict what we are most susceptible to in the future. So could we use AI 20 years before someone develops a condition such as Alzheimer's, and help design treatment, medical, and lifestyle adjustment plans accordingly? The potential really is there.
And one thing seems overwhelmingly clear: this is going to utterly transform what it means to be a doctor. Doctors may no longer have to memorize and recite conditions from a textbook by rote, as we currently train them to do. Instead, we might screen doctors for their human relationships, their emotional intelligence, and their empathy. As Topol says, this might ultimately mean a shift in the system from curing to healing.
I'm Taylor Owen, and thanks for watching.
How neurotech could enhance our brains using AI
So earlier this year, Elon Musk's other company, Neuralink, successfully installed a brain implant in the brain of Noland Arbaugh, a 29-year-old quadriplegic man. Last week, this story got a ton of attention when Neuralink announced that part of the implant had malfunctioned. But I think this news cycle, and all the hyperbole around it, misses the bigger picture.
Let me explain. First, this Neuralink technology really is remarkable. It allowed Arbaugh to play chess with his mind, which he showed in his demo. But the potential beyond this really is vast. It's still early days for this technology, but there are signs it might help us eliminate painful memories, repair lost bodily functions, and maybe even allow us to communicate with each other telepathically.
Second, this brain-implant neurotechnology is part of a wider range of neurotech. A second category isn't implanted in your body; instead, it sits on or near your body and picks up electrical signals from your brain. These devices, which are being developed by Meta and Apple, among many others, are more akin to health-tracking devices, except they open up access to our thoughts.
The third point here is that this is an example of a technology adjacent to AI being turbocharged by recent advances in AI. One of the challenges of neurotech has been making sense of all of the data coming from our brains, and this is where AI becomes really powerful: we increasingly have the power to give this data from our minds meaning. The result is that these technologies, and the corporations developing them, have access to the most private data we have: data about what we think. Which brings up the bigger point: we're on the cusp of opening up access to our brain data, and the potential for abuse really is vast. And it's already happening.
Governments are already using neurotech to try to read their citizens' minds, and corporations are working on ways to advertise to potential customers in their dreams. And finally, I think this shows very clearly that we need to be thinking about regulation, and fast. Nita Farahany, who has recently written a book about the future of neurotechnology called "The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology," thinks we have a year to figure out the governance of this tech. A year. It's moving that fast. While so many in the AI debate are discussing the existential risks of AI, we might want to pay some attention to the technologies that are adjacent to AI and being empowered by it, as they likely present a far more immediate challenge.
I'm Taylor Owen, and thanks for watching.
Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, further drive loneliness, only at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter, especially as companies like Replika demonstrate AI's potential to ease loneliness and even reconnect people with lost loved ones.
So, like a lot of people, I've been immersing myself in the debate about this current AI moment we're in, and I've been struck by a recurring theme: will AI further divide us, or could it actually bring us closer together?
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, but on steroids. If social media took our collective attention and used it to keep us hooked on public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and, in some cases, even our loneliness. And broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems that social media and previous technologies have created for society. But I've been particularly struck by people who argue the other side: that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it differs from previous technologies not in degree but in kind, that it's something fundamentally different. I initially recoiled from this suggestion, because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a friend of hers named Roman who had died, and she describes how his close friends and even his family members were overwhelmed with emotion when talking to it and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. You don't just lose the person or their presence in your life; you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person and of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that tech, that more tech, could solve the problems caused by tech is a difficult proposition for many to stomach. But here's what I think we should be watching for as we bring these new tools into our lives. As we take up AI tools online, in our workplaces, in our social lives, and within our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitch of productivity, and undervaluing human connection, either the human connection we're losing by using these tools or perhaps the human connections we're gaining? And do these tools ultimately further divide us, or do they provide the means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
One last thing I want to mention here: I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, including the ones we've been covering in this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.
Israel's Lavender: What could go wrong when AI is used in military operations?
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, they said that "Lavender is simply a database whose purpose is to cross-reference intelligence sources." If the allegations are true, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, location data, social media profiles, cell phone information, and these data are then used to build profiles of other potential militants.
But of course, these systems are only as good as the training data they are built on. One source who worked with the team that trained Lavender said that some of the data used came from employees of the Hamas-run Internal Security Ministry, who aren't considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system makes the system more likely to target civilians. And this does appear to be what's happening. The sources say that Lavender is 90% accurate, but this raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
With the rise of AI systems in the workplace, it seems inevitable that militaries around the world will begin to adopt technologies like Lavender. Countries around the world, including the US, have set aside billions of dollars for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, and how their algorithms identify targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.