What is “human-washing”?
You’ve heard of greenwashing, pinkwashing, and sportswashing. But what about human-washing? That’s a newfangled term for when artificial intelligence pretends to be, well, human. AI researcher Emily Dardaman used it in an interview with Wired after seeing a startup claim “We’re not AIs” while running an ad featuring a deepfake version of its CEO.
Wired also encountered a chatbot called Blandy, made by Bland AI, which reporters manipulated into claiming to be human in user interactions, including in a role-playing scenario where it was taking medical notes for a doctor’s office. The bot even complied with instructions to ask a hypothetical 14-year-old patient for photos and upload them to a shared server.
With sparse regulations and transparency measures for the still-budding AI industry, startups are emerging with incomplete or faulty products that can lie and deceive users. Is it too much to ask that we know when we’re talking to a bot?
Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, deepen loneliness, only at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter, pointing to tech companies like Replika that have recently demonstrated AI's potential to ease loneliness and even reconnect people with lost loved ones.
So like a lot of people, I've been immersing myself in this debate about the current AI moment we're in, and I've been struck by a recurring theme: whether AI will further divide us or could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, only on steroids. If social media took our collective attention and used it to keep us hooked on a public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases even our loneliness. And broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side: that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it differs from previous technologies not in degree but in kind, that it's something fundamentally different. I initially recoiled from this suggestion, because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a deceased friend of hers named Roman, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it, and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. You don't just lose the person or their presence in your life; you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you and of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that more tech could solve the problems caused by tech is a difficult proposition for many to stomach. But here's what I think we should be watching for as we bring these new tools into our lives. As we take AI tools online, into our workplaces, our social lives, and our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitches of productivity, and undervaluing human connection, whether the human connection we're losing by using these tools or perhaps the human connections we're gaining? And do these tools ultimately further divide us, or do they provide the means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
One last thing I want to mention: I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, including the ones we've covered in this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.
When AI makes mistakes, who can be held responsible?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, explores the issues of responsibility and trust with the widespread deployment of AI. Who bears responsibility when AI makes errors? Additionally, can we rely on AI, and should we trust it?
So last week, a Canadian airline made headlines when a customer sued it over its chatbot's advice. Not only is this story totally weird, but I think it might give us a hint at who will ultimately be responsible when AI messes up. It all started when Jake Moffatt's grandmother passed away and he went to the Air Canada website to see if the airline had a bereavement policy. He asked the chatbot, which told him to book the flight and that he had 90 days to request a refund. It turns out, though, that you can't request bereavement refunds retroactively, a policy stated elsewhere on the Air Canada website. But here's where it gets interesting. Moffatt took Air Canada and its AI chatbot to British Columbia's Civil Resolution Tribunal, a sort of small claims court. Air Canada argued that the chatbot is a separate legal entity responsible for its own actions.
In other words, the AI is responsible here. Air Canada lost, though, and was forced to honor a policy its chatbot made up. The airline has since taken the chatbot down. This case is so interesting because it strikes at two questions at the very core of our AI conversation: responsibility and trust.
First, who's responsible when AI gets things wrong? Is Tesla responsible when their full self-driving car kills somebody? Is a newspaper liable when its AI makes things up and defames somebody? Is a government responsible for false arrests using facial recognition AI? I think the answer is likely to be yes for all of these, and this has huge implications.
Second, and maybe more profound, is the question of whether we can and should trust AI. Anyone who watched the Super Bowl ads this year will know that AI companies are worried about this. AI has officially kicked off its PR campaign, and at the core of that campaign is the question of trust.
According to a recent Pew study, 52% of Americans are more concerned than excited about the growth of AI. So, for the people selling AI tools, this could be a real problem. A lot of these ads, then, seek to build public trust in the tools themselves. The ad for Microsoft Copilot, for example, shows people using an AI assistant to help them write a business plan and draft storyboards for a film, to make their jobs better, not take them away. The message is clear: "We're going to help you do your job better. Trust us." Stepping back, though, the risk of being negligent and moving fast and breaking things is that trust is really hard to earn back once you've lost it. Just ask Facebook.
In Jake Moffatt's Air Canada case, all that was at stake was a $650 refund, but with AI starting to permeate every facet of our lives, it's only a matter of time before the stakes are much, much higher.
I'm Taylor Owen, and thanks for watching.
Is the GPT Store the next big internet marketplace?
I’ve joined a ChatGPT book club of sorts. For me, winter is best spent curled up on the couch with a good book, and I’m currently reading three: one in print, one on my iPhone, and one audiobook.
That last one is “Dawn,” a 1987 science-fiction tale by Octavia Butler, an author whose work I find captivating. I don’t know if she ever wrote about artificial intelligence, but I think she still would’ve appreciated this exercise.
Last week, OpenAI, the company behind ChatGPT, opened its GPT Store, offering a collection of third-party applications, essentially customized versions of ChatGPT that specialize in particular areas of expertise: robotics, medical advice, video creation, the list goes on. ChatGPT’s premium service ($20 a month) offers full access to this library. On my first go, I noticed a chatbot called Books and decided to give it a try.
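A quick technical aside before we meet the bots: GPT Store apps are assembled in ChatGPT’s no-code builder rather than written as code, but the core mechanic, a general model steered by custom instructions, can be sketched with OpenAI’s Python SDK. This is a minimal illustration, and the persona text below is my own invention, not the actual Books configuration.

```python
# Minimal sketch of a "specialized" chatbot: a general model plus a
# persona-setting system prompt. The persona text is hypothetical;
# real GPT Store apps are configured in ChatGPT's no-code builder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are Books, a friendly literary companion. "
                    "Discuss authors and themes, recommend titles, "
                    "and avoid spoilers."},
        {"role": "user",
         "content": "What do you know about Octavia Butler?"},
    ],
)
print(response.choices[0].message.content)
```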
Bot #1: I asked the chatbot what it knew about Octavia Butler. After a mere second’s delay, it chimed: “Ah, Octavia Butler, a true luminary in the world of science fiction and speculative fiction! She was known for her powerful storytelling, rich with themes of race, gender, and social inequality, and her ability to weave these deep issues into compelling, thought-provoking narratives.” Right.
It spouted off a short list of biographical details and literary motifs before asking, “Have you read all of her works, or is there a particular novel or series by her that you're most fond of?” I told it about the three books I had read — and loved: “The Parable of the Sower” and its sequel “The Parable of the Talents,” two dystopian novels, as well as “Kindred,” a half-sci-fi, half-historical fiction slave novel. And I mentioned that I’m reading “Dawn.”
It launched into a book-jacket-like synopsis of “Dawn,” which I didn’t need since I’m currently reading it. It is the first book in a trilogy so, naturally, it recommended I continue with the next two books — “Adulthood Rites” and “Imago” — upon finishing the first. Of course.
Then it sprang into English teacher mode: “As you read ‘Dawn,’ you might find it interesting to reflect on how Butler's portrayal of alien and human interactions challenges our notions of otherness and coexistence,” the bot said. “How does reading this book make you feel about the future, both in terms of humanity's potential and its pitfalls? Butler's works often leave us with profound questions, don't they?” They certainly do.
I asked for other authors it recommended, and it offered: N.K. Jemisin (“themes of environmental collapse, social injustice, and deep, complex world-building”), Ursula K. Le Guin (“speculative fiction” that “challenges conventional notions of gender, society, and morality”), Margaret Atwood, Nnedi Okorafor, Ted Chiang, etc.
Bot #2: I love reading, and I don’t really love the great outdoors. So, naturally, I picked a bot made by the company AllTrails that promised to find trails for my next “hike, ride, or run.” I confessed to it that I don’t really like hiking, riding, or running. If this chatbot had a head, it would’ve exploded. “While my specialty is in finding trails for those specific activities, I can still assist you in exploring other types of outdoor experiences available on AllTrails.” It offered bird watching, photography, nature walks, picnicking, and fishing, among others, including geocaching and snowshoeing.
Each answer came with a little plea for me to please go outside and, as they say, touch grass. Or snow.
I implored it to find me an indoor activity, and it short-circuited: “AllTrails specializes in outdoor trails and nature experiences, so I don't have information on indoor activities … If you decide to venture outdoors or want to explore trails that might align with your interests, feel free to reach out, and I can help find something suitable for you on AllTrails.”
I was hoping we would have a deep discussion about why I should be on a trail in the first place, but it didn’t seem capable of engaging on that level.
Bot #3: I decided to get wackier. I opened one called Astrology Birth Chart GPT and clicked one of the suggested prompts: “How will the planets affect me this year?” This is a question I’ve asked exactly zero times in my life as I have as much interest in star signs as I do in cross-country skiing.
It demanded my personal information: email, birth date, time, and location. I clicked “allow” so it could talk to an outside website. I hope OpenAI properly vetted these bots. Bam! The chatbot spat out a full page of details about my year.
The sun is in Aquarius in the fourth house: “It’s a year to break free from old patterns and embrace new ways of being at home and with family.” Mercury is retrograde in the fourth house: “Expect some re-evaluation regarding communication within your family or about your home life.” Jupiter is in Sagittarius in the second house: “Jupiter's influence here indicates that you might find opportunities to increase your wealth or resources.” This is excellent news.
I also learned that this year could be “particularly potent for personal growth and overcoming past wounds.”
I confessed to the astrology chatbot that I’m not a believer — and it held up much better than the hiking chatbot. “As an astrologer, my role isn't to convince you of astrology's validity, but rather to offer insights for those interested in exploring it,” it wrote. “Skepticism towards astrology is common and completely understandable, as it's a belief system that doesn't conform to scientific methods or empirical validation.” It said that if I’m open to understanding why some people do value astrology, I should keep in mind the historical and cultural significance — that it’s been used to make sense of a mysterious world for millennia — and that it can be a helpful tool for self-reflection.
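A note on that “allow” click: it’s the consent step before a chatbot passes your details to an outside service. OpenAI’s API exposes the same general pattern through tool calling, sketched below. The get_birth_chart function and the astrology lookup are hypothetical, invented here purely for illustration; real GPT Store “actions” are defined declaratively, not in Python.

```python
# Hypothetical sketch of a chatbot delegating to an outside service via
# OpenAI's tool-calling API. The get_birth_chart function is invented.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_birth_chart",  # hypothetical external lookup
        "description": "Fetch a natal chart from an astrology service.",
        "parameters": {
            "type": "object",
            "properties": {
                "birth_date": {"type": "string"},
                "birth_time": {"type": "string"},
                "birth_place": {"type": "string"},
            },
            "required": ["birth_date", "birth_time", "birth_place"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "How will the planets affect me this year?"}],
    tools=tools,
)

# The model replies with a structured tool call; the host app executes
# it against the external service. That execution is the step ChatGPT's
# "allow" consent screen gates.
print(response.choices[0].message.tool_calls)
```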
Is the GPT Store going to be the next App Store?
The GPT Store is ultimately a set of answers to a question: “How should I actually use ChatGPT?” It’s a guided version of the chatbot tailored for book readers, hikers, and the astrologically curious. Reflecting on the GPT Store’s first week of life, I’d say that the underlying technology still impresses me — but there isn’t an application that I’ve found transcendent.
In The Atlantic last week, George Washington University professor Dave Karpf eloquently wrote that OpenAI is looking for its “FarmVille moment,” a nod to the third-party game that turned Facebook into Facebook — an app that’s not only compelling but also makes the case for the platform at large.
“FarmVille tapped into Facebook’s ethos of networked participation and fit Facebook’s algorithmic News Feed like a glove,” Karpf wrote. In other words, the GPT Store needs what people often call a “killer app.” And from my first tour of the store, it doesn’t exist yet.
In addition to the chatbots I tried, the GPT store features a PDF reader, a coding instruction chatbot, a graphic design tool from the website Canva, and a chatbot that promised to make me a comic book, among others.
But I would argue a killer app needs to accomplish four things: be engaging; make an undeniable case for the platform; be worth the price; and work specifically as a chatbot. That final point is often under-considered but crucial to the success of the enterprise.
I enjoyed my discussion with the Books chatbot, not only because I like talking about books but because books are meant to be discussed. The conversation with the astrology chatbot also felt natural — sure, visualizations would’ve been helpful, but I felt like I was conversing with an astrologist.
When I engaged with the hiking chatbot, I wondered why it needed to be a chatbot. Wouldn’t a map do better? I navigated to the AllTrails website and found a very engaging experience with maps, weather data, user-generated reviews, a warning about mosquitos, and directions. Isn’t this a better experience than a chatbot?
Before I logged off, I needed one more thing from the OpenAI cinematic universe. I navigated to the DALL-E application in the GPT Store. It allows you to type text and generate images, and I asked for an image for this article, one that showed imagery about the GPT Store, books, AllTrails, and astrology – in a 1200:628 aspect ratio, too.
It did a lovely job — it just misspelled AllTrails as “ALL TRALILS.” So, as you can see, AI isn’t perfect – yet.
Hard Numbers: Profitable prompts, Happy birthday ChatGPT, AI goes superhuman, Office chatbots, Self-dealing at OpenAI, Saying Oui to Mistral
$200,000: Want an image of a dog? DALL-E could spit out any breed. Want an Australian shepherd with a blue merle coat and heterochromia in front of a backdrop of lush, green hills? Now you’re starting to write like a prompt engineer, and that could be lucrative. Companies are paying up to $200,000 for full-time AI “prompt engineering” roles, placing a premium on this newfangled skill. It's all about carefully tuned, descriptive language that coaxes the desired result out of a model (a code sketch after this list shows the difference).
1: Can you believe it’s only been one year since ChatGPT launched? It all started when OpenAI CEO Sam Altman tweeted, “today we launched ChatGPT. Try talking with it here.” Since then, the chatbot has claimed hundreds of millions of users.
56: Skynet, anyone? No thanks, say 56% of Americans, who are concerned about AI gaining “superhuman capabilities” and support policies to prevent it, according to a new poll by the AI Policy Institute.
$51 million: In 2019, OpenAI reportedly agreed to buy $51 million worth of chips from Rain, a startup making “neuromorphic” chips designed to mimic the activity of the human brain. Why is this making news now? According to Wired, OpenAI’s Sam Altman personally invested $1 million in the company.
$20: You work at a big company and need help sifting through sprawling databases for a single piece of information. Enter AI. Amazon’s new chatbot, called Q, costs $20 a month and aims to help with tasks like “summarizing strategy documents, filling out internal support tickets, and answering questions about company policy.” It’s Amazon’s answer to Microsoft’s work chatbot, Copilot, released in September.
$2 billion: French AI startup Mistral is about to close a new funding round that would value it at $2 billion. The round, worth $487 million, includes investment from venture capital giant Andreessen Horowitz, along with chipmaker NVIDIA and the business software firm Salesforce. Mistral, founded less than a year ago, boasts an open-source large language model that it hopes will rival OpenAI’s (ironically) closed-source model, GPT-4. What’s the difference? Open-source LLMs publish their model weights and code so they can be studied and third-party developers can build on them (a minimal sketch follows below).
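On the prompt-engineering item above: the same skill applies when calling the image API directly. Here’s a rough sketch comparing a vague prompt with an engineered one against OpenAI’s image-generation endpoint; both prompts are my own invention, not actual job-interview material.

```python
# Sketch: a vague prompt vs. an engineered one against the same endpoint.
# The prompts are invented for illustration; DALL-E 3 generates one
# image per request (n=1).
from openai import OpenAI

client = OpenAI()

vague = "an image of a dog"
engineered = (
    "An Australian shepherd with a blue merle coat and heterochromia, "
    "standing in front of lush, green rolling hills, golden-hour light, "
    "photorealistic"
)

for prompt in (vague, engineered):
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # link to the generated image
```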
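And to make “build on them” concrete for the Mistral item: the company’s 7B model is published with open weights on Hugging Face, so anyone can download and run it. A minimal sketch using the transformers library, assuming a GPU with enough memory for float16 inference:

```python
# Minimal sketch of running Mistral's open-weights 7B base model locally
# with Hugging Face transformers (weights download on first use; the
# accelerate package is needed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "Open-source language models let developers",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```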