Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the rise of AI agents.
Today I want to talk about a recent big step towards the world of AI agents. Last week, OpenAI, the company behind ChatGPT, announced that users can now create their own personal chatbots. Prior to this, tools like ChatGPT were primarily useful because they could answer users' questions, but now they can actually perform tasks. They can do things instead of just talking about them. I think this really matters for a few reasons. First, AI agents are clearly going to make some things in our life easier. They're going to help us book travel, make restaurant reservations, manage our schedules. They might even help us negotiate a raise with our boss. But the bigger news here is that private corporations are now able to train their own chatbots on their own data. So a medical company, for example, could use personal health records to create virtual health assistants that could answer patient inquiries, schedule appointments or even triage patients.
Second, this, I think, could have a real effect on labor markets. We've been talking for years about how AI was going to disrupt labor, and that may soon actually start to happen. If you have a triage chatbot, for example, you might not need a big triage center, and therefore you'd need fewer nurses and fewer medical staff. But having AI in the workplace could also lead to fruitful collaboration. AI is becoming better than humans at breast cancer screening, for example, but humans will still be a real asset when it comes to making high-stakes life-or-death decisions or delivering bad news. The key point here is that there's a difference between technology that replaces human labor and technology that supplements it. We're at the very early stages of figuring out that balance.
And third, AI safety researchers are worried about these new kinds of chatbots. Earlier this year, the Center for AI Safety listed autonomous agents as one of its catastrophic AI risks. Imagine a chatbot programmed with incorrect medical data triaging patients in the wrong order. This could quite literally be a matter of life or death. These new agents are a clear demonstration of the growing disconnect between the pace of AI development, the speed with which new tools are being built and let loose on society, and the pace of AI regulation meant to mitigate the potential risks. At some point, this disconnect could catch up with us. The bottom line, though, is that AI agents are here. As a society, we had better start preparing for what this might mean.
I'm Taylor Owen, and thanks for watching.