TRANSCRIPT: Talking AI: Sociologist Zeynep Tufekci explains what's missing in the conversation

Zeynep Tufekci:

What I fear is not being discussed is the ways in which artificial intelligence allows certain things to be done cheaper and at scale. Rather than looking at what happens between you and me if it is used, what happens if it's used by a billion people?

Ian Bremmer:

Hello and welcome to the GZERO World Podcast. I'm Ian Bremmer, and on today's show I'm bringing you a conversation I had on the sidelines of the Paris Peace Forum with sociologist and all-around brilliant human being Zeynep Tufekci. Tufekci has been prescient on a number of issues, from COVID to misinformation online, and as you'll hear, I caught up with her outside, so pardon the traffic, but it's fun. It's got some aural background to it. I asked her what people are missing when they talk about artificial intelligence today, and her answer surprised me, not because I don't agree with her, but because it seems so obvious in retrospect. So let's get to it.

Speaker 3:

The GZERO World Podcast is brought to you by our lead sponsor, Prologis. Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate, and the only end-to-end solutions platform addressing the critical initiatives of global logistics today. Learn more at prologis.com. This podcast is also brought to you by Bleecker Street and LD Entertainment, presenting I.S.S. When war breaks out on Earth between the US and Russia, astronauts aboard the International Space Station fight each other for control. This sci-fi thriller is only in theaters January 19th.

Ian Bremmer:

Zeynep Tufekci. So good to see you.

Zeynep Tufekci:

Thank you.

Ian Bremmer:

Artificial intelligence. Big topic right now. Tell me what most concerns you about the way it's being discussed by those that matter.

Zeynep Tufekci:

What I fear is not being discussed is the ways in which artificial intelligence will be used because it allows certain things to be done cheaper and at scale. I think there's a lot of emphasis and discussion on things like what they call artificial general intelligence, on things like whether AI will one day be smarter than an average human being.

Ian Bremmer:

The existential risks that are out there.

Zeynep Tufekci:

Well, not even that. Not even that, because whether or not AI becoming smarter brings existential risk, that's a separate question. Whether or not AI becomes even smarter than humans in any one domain is an interesting question. There are things to think about, but there are a lot of things that other creatures or calculators already do better than human beings, and that's not really the key pivot around which society changes. The key pivot is whether AI enables things that were otherwise difficult or too expensive or too onerous to do at scale by corporations and by governments, and whether it brings new ways of inferring, controlling, manipulating, and making decisions, once again at scale.

So rather than trying to see, does this pass the bar exam? Well, okay, whatever, that's an interesting question. We should look at things like, well, if we start using this system to try to decide who gets benefits or some other legal question, it doesn't have to have passed the bar exam for it to be deployed at scale to do that. I'm not even saying it would be bad for it to be deployed at scale. But what's not happening is this: rather than looking at what happens between you and me if it is used, what I would like to see discussed is what happens if it's used by a billion people, because that's a different question.

Ian Bremmer:

Give us a way that AI might prove to be potentially transformative at scale that we truly have not anticipated adequately.

Zeynep Tufekci:

Okay, so I will give you an example from hiring. There's a lot of evidence that hiring historically has been biased. People hire people like them. People hire from their alumni networks disproportionately, which seems like a positive, but if you're not in those networks, it's a problem. There are racial biases, there are gender biases. So we know all of that. So there's a great push to use more automated systems to do hiring, to help with the human biases, and I'm there, this is good. And people have discovered that when you train it on the human data, it picks up some of those human biases.

Fine, but you know what? It might even be easier to fix than humans, right? Because if humans are biased, sometimes convincing them is harder. Whereas with AI, at least we can deliberately say, "Oh, we detected a bias," and use it as an opportunity to correct it, and we could end up with a more diverse workforce. So sounds great, right? So here's an example of the problem we could be facing. Computational systems, AI systems, have been shown to be able to infer things, which means they can deduce from your data things that we as human beings could not. For example, they have been used...

Ian Bremmer:

Facebook, for example, showing that people are gay when you otherwise wouldn't necessarily be able to know.

Zeynep Tufekci:

People prone to chronic depression, people with compulsive gambling problems, people who might be prone to getting pregnant in the next year, before they're either pregnant or depressed or any of that, or people who are introverted. There are all these things that are not obvious in a job interview that, once it looks at all your data, it could infer. So, for example, we use AI to hire, we fix the gender and race bias by deliberately intervening. We know already humans do it, everything looks great. We got racial diversity, we got gender diversity, everything looks great. And then we discover everybody prone to chronic depression has been weeded out by the computer, without us even knowing that's what it was doing, because we asked it to match our existing workforce or existing high producers or whoever we had. Or we had a lot of very aggressive, unpleasant people and we just said, "Match this." It just matched this because...

Ian Bremmer:

So because regulations as they stand...

Zeynep Tufekci:

Would not catch this.

Ian Bremmer:

Would not catch that, because you wouldn't even know that you're discriminating.

Zeynep Tufekci:

And in fact, before AI, it wasn't really possible to do this at scale. You might've said, "I am going to eliminate everybody prone to chronic depression," which would not be a good thing, it would be a terrible society if we did that, but you couldn't really infer it from just a job interview. Whereas we know from scientific studies that from just people's Instagram posts, the AI can infer that. And it's not something easy, because if you take the same posts and show them to people, people can't eyeball it. So it's not something as simple as they're posting gray colors or posting their despair.
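
The mechanism Tufekci is describing here is often called proxy discrimination, and a minimal sketch can make it concrete: a screening model trained only to "match" past hires can end up penalizing a hidden trait it was never told about, because the trait leaks into observable features. Everything in this sketch (the synthetic feature setup, the numbers, scikit-learn as the modeling tool) is an illustrative assumption, not anything from the interview.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden trait the employer never sees or asks about (hypothetical:
# propensity to chronic depression). It subtly shifts one observable
# signal, the way Instagram-post patterns did in the studies cited.
hidden = rng.binomial(1, 0.2, n)
features = rng.normal(0.0, 1.0, (n, 5))
features[:, 0] -= 0.8 * hidden  # the trait leaks into feature 0

# "Match this": past hiring happened to track feature 0, so the label
# encodes the exclusion of the hidden trait without ever naming it.
hired = (features[:, 0] + rng.normal(0.0, 0.5, n) > 0).astype(int)

model = LogisticRegression().fit(features, hired)
scores = model.predict_proba(features)[:, 1]

# The model was never given `hidden`, yet its scores penalize it.
print("mean score without the trait:", round(float(scores[hidden == 0].mean()), 3))
print("mean score with the trait:   ", round(float(scores[hidden == 1].mean()), 3))
```

Running the sketch shows a clear score gap between the two groups, even though the hidden trait never appears in the training data as a column, which is exactly why an audit of the model's inputs alone would not reveal the discrimination.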

Ian Bremmer:

No, there are underlying patterns that come from crunching massive data.

Zeynep Tufekci:

There are some things it's detecting. Like, for example, can AI weed out people who might be prone to unionizing, or the ones who will be whistle-blowers? These sound science fiction-y until you start reading the literature, and you're like, wait, it can do all of this. And whether or not it can do this 100% of the time, or whether or not it can do this better than a person, doesn't really matter, because if it can do it 80% of the time and it just costs 10 cents...

Ian Bremmer:

Then you're going to use it.

Zeynep Tufekci:

They're going to use it. Whereas in the past, if you wanted to find the potential whistle-blowers or union organizers, you had to spend something like $10,000 investigating each person, so you were not going to do it. So I think the lowering of price and the expanding of capacity at the hands of the powerful is really an underrated risk, and much more likely to happen compared to Arnold Schwarzenegger coming down the corridor from the future. Yeah.
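
The arithmetic behind that point is worth making explicit. Using the figures from the conversation ($10,000 per human investigation, 10 cents per automated screen, 80% accuracy), and an applicant-pool size that is purely a hypothetical placeholder, the cheap-and-imperfect option wins on cost by five orders of magnitude:

```python
# Back-of-the-envelope sketch of the economics Tufekci points to.
# The dollar figures come from the conversation; the pool size is
# a hypothetical assumption for illustration.
candidates = 100_000                # hypothetical applicant pool

human_cost = candidates * 10_000    # "$10,000 investigating each person"
ai_cost = candidates * 0.10         # "it just costs 10 cents"
ai_accuracy = 0.80                  # "if it can do it 80% of the time"

print(f"Human investigation: ${human_cost:>15,.0f}")
print(f"Automated screening: ${ai_cost:>15,.0f}")
print(f"Cost ratio: {human_cost / ai_cost:,.0f}x cheaper, at {ai_accuracy:.0%} accuracy")
```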

Ian Bremmer:

That's a fascinating kind of risk, and I really like the example. Now let me ask you about a different kind of risk. What about the risks that come from deployment of AI tools in the hands of billions of people?

Zeynep Tufekci:

Depends. Just like the internet, there are lots of things you can think of that are positive. Once people have more access to information... I like the translation capacities. I grew up not speaking English, so the idea that people around the world can now use translation tools to read papers, things like that, sounds great to me. On the other hand, let's say we deploy facial recognition at a mass scale, so all of a sudden you can go around with a camera, get people's names immediately, and then connect that to other data about them. That's genuinely scary, because if you completely lose the right to be anonymous in public, that would empower people to stalk, harass, follow you, or just make you feel self-conscious. None of that is good.

Ian Bremmer:

Is that becoming inevitable?

Zeynep Tufekci:

No, of course not. Of course not. See, this is the thing. It is not inevitable. What is inevitable is that there will be the technology to do facial recognition, because the technology is not that complicated. What we're calling AI right now, especially machine learning models, the science behind it has been around since World War II. It's not very complicated. What changed is that the method needed data to eat. It didn't have a lot of data, so it didn't work. What changed is the big data, and once we have the big data, it will happen. It'll be doable, and others will do it. So we can't really say, "Oh..." It's not like nuclear weapons, where you can try to gatekeep, say, enriched uranium and certain things.

Ian Bremmer:

Yes, the tech is going to be there, it's going to be there.

Zeynep Tufekci:

Yes, but there are a lot of technologies that are there that we just don't allow. We just say, "You cannot put facial recognition software on phones. You cannot have facial recognition databases. You cannot do that." We can easily regulate and say that, and people say, "Well, can we really do that?" I'm like, "Yes, we can." Just look at how many things we can't put into food. You cannot put heavy metals into food. You cannot use certain kinds of things in paint, heavy metals. There are things you cannot put in the air. It keeps the public safe, and I feel like that's the job of a regulator. There are things that, if unchecked, could be used widely to lower the prices of products or to do something else, and we just say "No," because as a society that's not a price worth paying. I would make facial recognition, for example, one of those things.

Ian Bremmer:

One of those things. So, to close with something positive: in terms of the direction of regulation so far...

Zeynep Tufekci:

Yes.

Ian Bremmer:

What's the thing you've seen or what's the actor around the world that you've seen that gives you greatest cause for hope?

Zeynep Tufekci:

The greatest cause for hope I have is how many are trying to act. So we had the executive order in the United States. We have other processes. Do I like any one of those as they are? They have pluses. There are a lot of good things.

Ian Bremmer:

No, no, I'm not asking holistically. I'm asking for something specific, a piece of the regs where you think, "They're getting this piece of AI right," or, "They're on track to get this piece right."

Zeynep Tufekci:

You know what? I'm going to take inspiration now from the corporate process, which is that they don't really try to prejudge what's good. What they do is they iterate. They throw something out there. So I'm really hopeful, and feel great about the fact that everybody's trying to get something out there, more than about any one piece of it. I don't think there's any one piece of it that stands out to me where I was like, "That's going to get it under control." But the fact that everybody's rushing to get something out there, and they seem to understand that you can't just throw it out there and then sleep on it, because you have to throw it out there and then see how it works. Because I don't think any of us are in a position, with crystal balls, to say, "This is exactly what's going to happen, so just regulate the models." I don't even presume I or anyone has that vision. But so many governments are rushing before the window closes. I was talking to Prime Minister Jacinda Ardern last week by...

Ian Bremmer:

Former New Zealand PM, we had on the show.

Zeynep Tufekci:

Yes. And she had the Christchurch Call after the Christchurch massacre.

Ian Bremmer:

And immediately, it was done within days.

Zeynep Tufekci:

So I talked to her about what she had learned, and one of the things she said, which is very true, is that it is hard to retrofit guardrails, because what companies do is create facts on the ground, and then they say, "We have facts on the ground and we make money." And then governments become reluctant, because they're making money, even though they're not aligned with democracy or human values. So I think this time there's a real political will to not let corporate-driven facts on the ground run the show. It's always hard to regulate complex things, but the fact that there's so much understanding that we're going to try, and we're going to be there as the representative of the people, and if we don't get it right we will try again, is a lot more hopeful to me than any one of the regulations. I'm like, "Okay, let's try this. Let's try this. Let's do something."

Ian Bremmer:

Zeynep Tufekci, thanks for joining us.

Zeynep Tufekci:

Thank you for inviting me.

Ian Bremmer:

That's it for today's edition of the GZERO World Podcast. Do you like what you heard? Of course you did. Why don't you check us out at gzeromedia.com and take a moment to sign up for our newsletter? It's called GZERO Daily.

Speaker 3:

The GZERO World Podcast is brought to you by our lead sponsor, Prologis. Prologis helps businesses across the globe scale their supply chains with an expansive portfolio of logistics real estate, and the only end-to-end solutions platform addressing the critical initiatives of global logistics today. Learn more at prologis.com. This podcast is also brought to you by Bleecker Street and LD Entertainment, presenting I.S.S. When war breaks out on Earth between the US and Russia, astronauts aboard the International Space Station fight each other for control. This sci-fi thriller is only in theaters January 19th.

GZERO World would also like to share a message from our friends at Foreign Policy. Global Reboot, a podcast from Foreign Policy Magazine, was created as countries and economies emerged from the pandemic and called for a reboot. On each episode, host and Foreign Policy Editor-in-Chief Ravi Agrawal asks some of the smartest thinkers and doers around to push for solutions to the world's greatest problems, from resetting the US-China relationship to dealing with the rise of AI and preserving our oceans. Find Global Reboot, in partnership with the Doha Forum, wherever you get your podcasts.
