TRANSCRIPT: AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in
Azeem Azhar:
One of the dangers of last year was that people started to lose their faith in technology. And technology is what does provide prosperity, and we need to have more grown-up conversations, more civil conversations, more moderate conversations about what that reality is.
Ian Bremmer:
Hello and welcome to the GZERO World Podcast. This is where you'll find extended versions of my conversations on public television. I'm Ian Bremmer, and today we are talking about artificial intelligence and the future of work.
Now, personally, I've never been more excited about a new technology and the opportunities that it will very, very quickly bring. But artificial intelligence is also a risk because the change that it is bringing is coming real, real fast. Generative AI tools like ChatGPT and Midjourney will massively increase productivity, reduce waste, and allow us to measure things everywhere in real time. But there are also reasonable fears of job displacement and unequal access to that technology, and those are coming really fast too. Human capital has been the powerhouse of economic growth for most of history. But the unprecedented pace of advances in artificial intelligence is stirring up excitement and deep anxiety about not only how we will work, but whether we're going to work at all.
Will AI be the productivity booster CEOs hope for, or the job killer that employees fear? You know it's both. Will the idea of work itself look radically different in 10 or even three years? To find out, I'm talking to two very smart humans, at least for now, about the hype versus reality of this AI moment and how we will create a workforce that can thrive in the world of whatever comes next. First, I'm sitting down with Azeem Azhar, AI expert, writer, and co-chair of the World Economic Forum's Global Futures Council on Complex Risks. And here's our conversation.
Speaker 3:
The GZERO World Podcast is brought to you by our lead sponsor, Prologis. Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate, and the only end-to-end solutions platform addressing the critical initiatives of global logistics today. Learn more at prologis.com.
This podcast is also brought to you by the feature film One Life. One Life tells the incredible true story of Nicholas "Nicky" Winton, a young man who helped rescue hundreds of predominantly Jewish children from Czechoslovakia in a race against time before Nazi occupation closed the borders on the verge of World War II. 50 years later, Nicky, played by Sir Anthony Hopkins, is haunted by the fate of those he wasn't able to bring to safety. Also starring Helena Bonham Carter and Jonathan Pryce, Variety calls One Life "stirring, a testament to the power of good." And the Daily Beast says Hopkins "gives a stunning performance." Only in theaters March 15th.
Ian Bremmer:
Azeem Azhar, thanks so much for joining today.
Azeem Azhar:
I'm so happy to be here.
Ian Bremmer:
So we're going to have an AI conversation, but that's all we've had is AI conversations, right? You walk around Davos this week and it's as if everyone is an AI expert.
Azeem Azhar:
There is so much AI in the promenade. There's very little crypto, so AI has taken all of the crypto and the blockchain stuff, and each company has come out with some form of enterprise AI tool. It's where we are.
Ian Bremmer:
What's the most ridiculous expression of AI that has manifested itself in front of you this week?
Azeem Azhar:
Oh, this week. That's a good one. I think the thing that has surprised me the most has been... Actually, where I felt ridiculous was how well CEOs were articulating generative AI. So the biggest manifestation was my shock to be in front of bosses with 50,000, 100,000, 200,000 employees and hear them talk with a level of detail about this technology that's only been public for a year or so. I've never experienced that in my life, and I had to go off and check my own assumptions and say, how did I not realize how quickly they've moved?
Ian Bremmer:
And how quickly the technology has moved, of course. But one of the things that has startled me is that the top experts in the field 10 years ago thought it would take decades. Even five years ago, thought it would take decades to get to where we are today, just with GPT-4.
Azeem Azhar:
Yeah, it's really, really shocking. What happened? What happened was that we figured out this new architecture called the Transformer a few years ago, developed at Google. They didn't pick up on it, and that team disappeared off to other places. In fact, even OpenAI didn't pick up on it. And then they did. And the real surprise was when ChatGPT came out: we could just talk to it as humans and it would talk back to us as humans. And that has created this cascade of innovation and investment.
And last year, 2023, the rate of progression was quite staggering. And when we think about what GPT-5 will do, I think we'll be surprised again. It will be better in every way compared to GPT-4, and that will be another moment of awakening.
Ian Bremmer:
And it will be increasingly training on our own individual data. So your experience with your AI will be different than my experience with my AI as opposed to right now when we're all getting the same answers when we ask the same questions.
Azeem Azhar:
That's right. And that could sound great, but I do worry about what it actually means. As a consumer, sometimes I just want to get a great recommendation for a hairdresser, not one that's really tailored to me. And of course, more pertinently, it raises questions about the information ecosystem and what we experience. Because if everything is filtered by these AIs, number one, we don't control them; they're controlled by big companies. And secondly, that filtering is going to present you with completely different things from the things that I see. We saw that play out a little bit with social media, of course, but it could be accelerated in this world of AI.
Ian Bremmer:
Now, when the top experts were so wrong about how fast this could explode, it does sort of raise the question of what else they're telling us now that they might be so incredibly wrong about.
Azeem Azhar:
I'll give you one thing that I think some of the top experts have got wrong. I think they worried too much about the existential risk problem. It dominated the narrative around AI in 2023, and it dominated the AI Safety Summit that was hosted in the UK in November.
Ian Bremmer:
The Bletchley Summit, yeah.
Azeem Azhar:
The Bletchley Summit. And you start to see people walk back those comments and those remarks. And what's been fantastic at the annual meeting of the World Economic Forum has been to see grown-up conversations take place between the top AI experts. We haven't heard so much about existential risk. We've heard about practical things relating to copyright, to regulation, to the workforce, to employment, the things that really matter.
Ian Bremmer:
So the boring applications that actually move the economy, that actually change how we live our lives.
Azeem Azhar:
Change how people live their lives, and, I think, also start to rebuild trust. One of the dangers of last year was that people started to lose their faith in technology. Ordinary voters were starting to turn against what is quite a helpful technology, and technology is what provides prosperity. And we need to have more grown-up conversations, more civil conversations, more moderate conversations about what that reality is.
Ian Bremmer:
Now, there were, you're right, a lot of experts who came out, fathers of the field, who said this could be the end of humanity within a matter of decades, within our lifetimes, essentially. And they were calling for pauses, for example, on the speed of development of these foundational models. Why would you say they've backed away from it? Because it's not like they know what the trajectory is going to be in five or 10 years any better today than they did a year ago, really.
Azeem Azhar:
Yeah. And I don't think all of them have backed away from it, but you do start to see that the wording has shifted. They're paying attention to other types of harms. I suspect they've started to back away from it because it requires so many fantastical assumptions to get to AI taking over the world. And as they've started to speak to people who understand the world in other ways, political scientists, sociologists, historians, business leaders, they've started to realize that perhaps those paths of improbability are even more improbable.
That's my hypothesis. I think we'll need to ask some of them directly what's helped them change their opinion, but I suspect it's the contact they've had with real people who live in the real world.
Ian Bremmer:
Now, of course, when you come to a meeting like this, you are in front of a lot of very smart, very powerful people who are very, very invested in you believing their story. And so they are selling a business model, they're selling their company, they're selling whatever they need you to buy. That does not necessarily drive trust. So what are the things that you are hearing that are aligned with the interests of technologists that you don't necessarily buy?
Azeem Azhar:
Well, I think one of the biggest is this discussion of closed proprietary systems compared to open source systems, which anyone can download.
Ian Bremmer:
Which is a civil war right now.
Azeem Azhar:
That is a civil war, with Meta on the open source side. Meta has pulled off an incredible repositioning exercise, from being really the bête noire of the big tech firms to the one that we all love, and we're so grateful to Yann LeCun, Meta's chief scientist, standing up for open source. But that is a civil war. And on the other hand, you have someone like Demis Hassabis, the founder of DeepMind, who believes it's far too risky to open up these models.
There's another moment of self-serving, which is how these AI models will actually find their way into companies. Someone like OpenAI will say, "You do this through what's known as an API." In other words, OpenAI runs the engine for you, and every time you make a request, you send it to OpenAI and you get charged. And there are others in the industry who are saying, "Well, no, we don't want to do that." And I think companies are saying that. They're saying, "It's our model, it's our data, it's our business. We want to run this ourselves." So I think there is a schism emerging there, and there will be a battle. And the challenge, I think, is that OpenAI is so far ahead: its technology is the best, GPT-5 will probably set a new benchmark, and they are supported by Microsoft. So there's a lot of momentum behind their approach.
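To make the trade-off concrete: under the API model Azhar describes, your application holds no model weights at all; it just assembles an HTTPS request, sends it to the provider, and gets billed per call. Here is a minimal, hypothetical Python sketch of what such a request looks like. The endpoint URL and the helper function are illustrative assumptions (loosely shaped like a chat-completions request), not any provider's actual client library.

```python
import json

# Illustrative sketch only: under the hosted-API model, the client never runs
# the model itself. It just assembles an HTTPS request that would be POSTed to
# the provider, which meters and bills every call against the API key.
API_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder

def build_chat_request(model: str, user_message: str, api_key: str) -> dict:
    """Assemble the pieces of one metered chat request (nothing is sent here)."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the key the provider bills
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return {"url": API_URL, "headers": headers, "body": json.dumps(body)}

req = build_chat_request("gpt-4", "Summarize our Q3 sales notes.", "sk-demo")
print(req["url"])
```

The alternative Azhar mentions, companies running models themselves, replaces this metered call with a self-hosted inference server: no per-request fee to a third party, but you carry the infrastructure cost and keep the data in-house.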
Ian Bremmer:
So in other words, the closed model you think right now is winning?
Azeem Azhar:
Well, I think the closed model will provide the better performance for companies and for customers, so we'll drift towards it. But I actually believe, Ian, that we will see a society of AIs. We will see hundreds and thousands of different types of models with different capabilities being used by consumers and businesses and governments, in much the same way that we have millions of people with different capabilities who interact with each other in our economies.
Ian Bremmer:
But with smartphones, again a transformative technology that everybody interacts with and through, we only have a few different options. So it's disintermediating. It's very important. And on the one hand you have quite expensive Apple devices that are comparatively secure in terms of your individual data. And then you have other devices, a lot of them less expensive, whose pitch is very much: we're going to sell your data, we're going to use your data. Are you saying that that is not likely to be the direction AI is heading?
Azeem Azhar:
No, I think AI will have many, many more types of systems out there, unless OpenAI becomes so much better than anyone else that we all flock to it. I suspect it will be a very mixed economy of different business models, different capabilities, different trade-offs. And in particular, consumers won't interact with AI systems in quite the way we do today, where we're really close to the metal. We'll interact with them because they're a component in an app that we actually care about. And so we'll be divorced from that, and perhaps we won't even think there's AI in there. We've just paid our $7 a month to access the app.
Ian Bremmer:
Now, usually I ask about concepts and ideas and not so much about people, but at this Davos, Sam Altman is the person who is driving the conversation. He's the hottest ticket; everyone wants to see him. What do we make of him? There's a lot of news around him: he's there, he's fired, he's back. I don't want you to talk about that. I want to talk about what he means and represents. Why is it that he has become the poster child for AI?
Azeem Azhar:
Well, he is so polished in his presentation. I like to think of him as someone who has been fine-tuned to absolute perfection. Now, fine-tuning is what you do to an AI model to get it to go from spewing out garbage to being as wonderful as ChatGPT is. And I think that Sam has sort of gone through the same process, because he is really polished. It's really hard even for great interviewers like you to get a chink in the armor and get in there. And I think he tells very good stories.
He's got one communication technique which I think is brilliant, and which I hope to learn myself. Sam gives you the prize at the beginning of his response, and then he explains the build-up to it. And it's really clever, because you know what you're going to get, and then the explanation comes. I do it the other way around. It's not as good. I will improve for the next time we have a conversation.
Ian Bremmer:
You will?
Azeem Azhar:
Yeah.
Ian Bremmer:
Are you actively working on it?
Azeem Azhar:
I'm going to try.
Ian Bremmer:
You're trying to Altmanize yourself?
Azeem Azhar:
Well, not Altmanize myself, but if I see someone doing something better than me, I'm going to learn.
Ian Bremmer:
That's good. There you go. But does he hallucinate when he chats?
Azeem Azhar:
He doesn't hallucinate. He is so measured. I interviewed him in May 2023 and I interviewed him a few years before, and the gap between the two was really significant. Just the polish and the refinement and the sense that he has been asked almost every question before, but he still gives you quite a fresh answer. It's quite tricky in that respect because at some point we want to get underneath that layer and see what's really there, and I don't think people have been able to.
Ian Bremmer:
Well, that is very AI-like.
Azeem Azhar:
That is very AI-like, yeah.
Ian Bremmer:
In the sense that we passed the Turing test, right? He's basically done it. And yet you think you're talking to a human being, but you have no idea what motivates or drives that human being.
Azeem Azhar:
And I think that that's the case with AI systems, and I think it is the case with Sam. For most people who observe him, what's the motivation? Is it money? Is it power? It's normally one of those two things, and they're fungible. But Sam tells a different story. He talks about wanting to work on the most interesting problem with the most interesting people. And that may feel a bit Pollyannaish to many of us, but that might actually be what's driving him. I don't know.
Ian Bremmer:
But it might not is what you're saying. You don't know.
Azeem Azhar:
We don't know.
Ian Bremmer:
Yeah, and that's a little unnerving. You know how sometimes we talk about the uncanny valley?
Azeem Azhar:
Yes, that's right.
Ian Bremmer:
You have someone that you think is really almost a human being, but somehow it doesn't quite connect, and it unnerves you? In the way you've just described him, there's a bit of that. There's a little uncanny Altman going on.
Azeem Azhar:
He has exactly those attributes. I think he's deeply strategic. So the other thing that's going on in my head is that he's planned this through. He's planned the behavior. He's planned the interactions. He knows what he needs to say because he's very, very thoughtful. So he's quite a hard person to play this game of chess against.
Ian Bremmer:
Fair enough. Azeem Azhar, wonderful to see you.
Azeem Azhar:
Thank you so much, Ian.
Ian Bremmer:
That was Azeem Azhar on the advances in generative AI tools. To learn more about how they could be deployed, I sat down with organizational psychologist and all-around man of many talents, Adam Grant. Here's our conversation.
Adam Grant, thanks so much for joining us today.
Adam Grant:
Don't thank me yet, Ian Bremmer.
Ian Bremmer:
Okay. I'll thank you to start just a little bit, a little bit. So you and I kibitz constantly during these meetings. We rarely actually talk publicly. The background has been AI all the time, a lot of which feels like it is overdone, not the technology, but the people talking about it. The hype. What are the couple of myths that you'd like to dispel about AI?
Adam Grant:
Well, how many hours do you have?
Ian Bremmer:
I have five minutes.
Adam Grant:
Okay, done. I think the first myth is that it's going to replace people's jobs immediately. Most of the CEOs I've talked to have no idea even how it's affecting their workforces yet and aren't really planning more than a year ahead. I think the second thing is a lot of people are thinking it's going to augment skills for people who are already really successful. Most of the evidence says the exact opposite, that it's a skill leveler, that the worse you are at your job, the more you benefit from AI tools.
Ian Bremmer:
So you're going to use AI a lot?
Adam Grant:
I plan to have my books entirely written by AI. Never. No.
Ian Bremmer:
I like it. I like it. Okay. At the same time that I've heard a lot about AI, I haven't heard a lot about DEI. I guess there are only so many acronyms we can use in one year, but how much of this is anti-woke backlash? How much of it is that this is not as relevant to global elites who are doing just fine in the marketplace? What do you think is driving it?
Adam Grant:
It has been a major conversation privately. I haven't seen a lot of public sessions about it, but I've talked to a lot of CEOs who have said, "I'm getting pushback on everything except for gender and disability. So race, ethnicity, LGBTQ: huge challenges with a divided workforce, and I don't know what to do. I want to create opportunities for everyone. I want everyone to be respected and included. I also don't want to reverse discriminate against anybody who historically has been advantaged but may not be today, and I don't know what to do."
Ian Bremmer:
What are a couple things that are useful for them to think about as they try to navigate what's a very toxic, very tribal landscape?
Adam Grant:
Well, I think the place to start is we've got to expand opportunity right at the gate. So I've been recommending Textio, I don't know if you've seen it. It's a tool that audits job descriptions for inclusive language. So this is going to shock you, Ian, but if you post a job description for a software engineer and you say, "We're looking for ninjas and rock stars," women don't apply. Who would've thought? You apply the tool, you get rid of that kind of biased language, and not only do you get a more diverse applicant pool, you actually fill your jobs better with faster turnaround time. And I think that's a great example of something that doesn't hurt anyone and that opens doors.
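For a sense of the mechanics, a tool like the one Grant describes can be approximated, very crudely, as a keyword audit over the posting text. This is a toy sketch under stated assumptions: the flag list and the function are invented for illustration and bear no relation to Textio's actual methods, which use far richer models and outcome data.

```python
# Toy illustration of auditing a job posting for exclusionary language.
# The flag list below is an invented example, not any real product's list.
FLAGGED_TERMS = {"ninja", "rock star", "rockstar", "guru", "dominate"}

def audit_job_description(text: str) -> list[str]:
    """Return the flagged phrases found in a posting, for human review."""
    lowered = text.lower()
    return sorted(term for term in FLAGGED_TERMS if term in lowered)

posting = "We're looking for ninjas and rock stars to dominate the market."
print(audit_job_description(posting))  # ['dominate', 'ninja', 'rock star']
```

In the workflow Grant outlines, a flag like this would prompt a rewrite of the posting before it goes live, with the payoff he cites: a broader applicant pool and faster, better hires.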
Ian Bremmer:
What I like about that is it's a concrete, small suggestion of something that people can actually do, because so much of the AI discussion, so much of the DEI discussion, is about lofty principles that are not helping people respond to the problem. Now, when you're in an environment where you have this many crises that you can't fix, my landscape, geopolitics, same thing. It feels like leaders are being less strategic. They're being buffeted, as you say; they're thinking forward maybe a year if they're lucky, but they're not really thinking about the long-term futures of their organizations. Am I just making that up, or is that reflected in your reality?
Adam Grant:
I'm seeing the same thing, and it's ironic because you come to Davos and you think this is the power center of the world. And yet for me, the overwhelming feeling is powerlessness. That you talk to CEOs who say, "I'm powerless to save democracy. I'm powerless to prevent climate change. I'm powerless to do anything about the AI changes that are about to transform my work in ways that I can't anticipate." And I think a lot of them don't know what to do in the long term. And so they're focused on quarterly returns and problems that are immediately solvable.
Ian Bremmer:
CEO is a job that is very highly compensated, but CEOs don't last very long in it. They're working incredibly hard hours. They're increasingly not liked by their workforces. And as you say, they're managing for very, very short-term gains. Is there any sense that maybe they should be trying to change some of those incentives, some of those motivations?
Adam Grant:
I've heard more discussion about it this year than ever in the past. So CEO burnout rates have gone up. We've also seen a growing disinterest in leadership jobs from Gen Z, and a little bit from millennials too, saying, "Why would I want to do that? It's a ton of responsibility. It's not clear how much good I can do. It's going to wreak havoc on my life. And yeah, I'm getting more and more hate from pretty much any corner of the world when I make a big decision." I don't think anybody's thought systematically about how we change that dynamic, but one thing I would love to see is more co-CEO structures, which is not that common-
Ian Bremmer:
What's one of the best examples that people might've heard of?
Adam Grant:
Warby Parker's an easy one.
Ian Bremmer:
The eyeglass company?
Adam Grant:
Yeah. Neil and Dave co-founded the company together. They've been co-CEOs from day one. And there's actually some research showing that if you look at matched pairs of companies that are in similar industries with similar financial positions, co-CEO structures actually outperform when you look at their financial returns. And I think the job is just so big and so complex that it's hard for one person to have all the skills that you need. And this is a little more in your world, but I've wondered if we're going to have co-presidents, co-prime ministers one day.
Ian Bremmer:
I don't see any move towards that, in part because in political power, things are so much more zero-sum, right? It's hard to do coalitions in that environment. But, look, the war cabinet in Israel has a bit of that, right? In the sense that the only way you were going to get consensus is if you shared the responsibility. Now, Warby and Parker, I assume those aren't actually two people, or are they? No, I'm just making that up.
Adam Grant:
No, just branding.
Ian Bremmer:
Okay, but how do they actually share?
Adam Grant:
Well, they divide and conquer on some areas of expertise, kind of like you would expect a CEO to do with a president or a COO.
Ian Bremmer:
Is one of them really in charge though?
Adam Grant:
No, that's what's interesting about it. They actually sit down and make all their important decisions together, and they don't always agree, but at the end of the day, they want to be aligned. I know of at least one paper showing that if the power imbalance is too great, then the co-CEO structure doesn't work anymore. But I think what you can do is make the power balance shift by topic. So one CEO is going to lead a little bit more on marketing, the other is going to lead a little bit more on finance, and the hope is that it balances out.
Ian Bremmer:
Okay, you just gave me a really interesting structure that is potentially more resilient for the challenging times we have right now. Are there any sectors where you see businesses not having the same level of toxicity, the same level of problems, in our economy today? And why might you say that is?
Adam Grant:
No.
Ian Bremmer:
None?
Adam Grant:
I can't think of one, honestly.
Ian Bremmer:
Wow. Because usually you think about, okay, we've got hedge fund bro culture, you've got tech bro culture, you've got finance guys, and everything's commoditized. But there are some others that you think, well, it's more of a team place. It's longer term. No.
Adam Grant:
I don't see any of this breakdown by sector. I think organizational culture dominates what might be occupational or what might be regional or might be technological.
Ian Bremmer:
Is it better or worse that you have a country like Japan that still has people that are committed to organizations for decades and decades? We know that it doesn't necessarily foster massive innovation and change, but it also makes people more comfortable where they are and who they are.
Adam Grant:
I think, like almost everything in life, it's a double-edged sword. So you're right, there's a cost of lower innovation if people are stuck in the same job in the same place for too long. But it is nice to feel like your employer has your back. And I've watched so many American employers screw this up. I'm so sick, Ian, of hearing CEOs say, "Our company is a family." No, a company is not a family. You don't fire your children. You don't furlough them in tough times. I think what they mean is that a company is a community, which is a place where you're going to be treated with respect and valued as a human being, but where there are also performance standards. And if you don't meet those standards, you're not going to stay.
Ian Bremmer:
Okay. So on a scale of one to 10, how broken is capitalism right now? You are seeing organizational culture, business environment in the United States. Scale of one to 10, how broken?
Adam Grant:
Five.
Ian Bremmer:
Five. What is the single biggest thing that is most broken right now?
Adam Grant:
It's such a long list. It's hard to rank. If I had to pick one, I'd probably say right now just making sure that people have an opportunity to have a job that supports their life.
Ian Bremmer:
And the thing that can be done that will most address that in your view?
Adam Grant:
I was a skeptic on universal basic income, but the research I've read suggests that at least creating a floor at the minimum-wage level is something we ought to consider seriously.
Ian Bremmer:
A floor for everyone?
Adam Grant:
For everyone.
Ian Bremmer:
Everyone in the country.
Adam Grant:
I think it's an experiment we should run.
Ian Bremmer:
It's interesting. For some reason, it never seems to pick up significant political momentum. And is that because we don't want a welfare state in the United States? We're too focused on the individual. If you don't have a job, you don't mean anything.
Adam Grant:
Yeah, I think work is a source of status in America. Think about when you meet somebody, the first thing you ask them is what do you do?
Ian Bremmer:
In the US?
Adam Grant:
Yeah. That's not true in France at all or in many other countries. And it's so interesting that you define yourself by your job. I think we're going to end up in a world one day where that doesn't happen anymore. And at that point, we're going to have to start thinking seriously about how do we make sure that everybody has access to food, water, shelter, and work may not provide that for everyone.
Ian Bremmer:
Existential question around this. As a psychologist, if we didn't ask people, what do you do as the first question, what would you like the first question to be?
Adam Grant:
So from a psychological standpoint, I would say instead of what do you do, I want to know what do you love to do?
Ian Bremmer:
What do you love to do? And Adam, what do you love to do?
Adam Grant:
I love to share ideas.
Ian Bremmer:
Look at that, and I'm glad I could facilitate that for you.
Adam Grant:
Thanks for making it happen.
Ian Bremmer:
Be good, my friend.
That's it for today's edition of the GZERO World Podcast. Do you like what you heard? Of course you did. Well, why don't you check us out at gzeromedia.com and take a moment to sign up for our newsletter. It's called GZERO Daily.
Speaker 3:
GZERO World would also like to share a message from our friends at Foreign Policy. Global Reboot, a podcast from Foreign Policy Magazine, was created as countries and economies emerged from the pandemic and called for a reboot. On each episode, host and Foreign Policy editor-in-chief Ravi Agrawal asks some of the smartest thinkers and doers around to push for solutions to the world's greatest problems, from resetting the US-China relationship to dealing with the rise of AI and preserving our oceans. Find Global Reboot, in partnership with the Doha Forum, wherever you get your podcasts.