AI & human rights: Bridging a huge divide
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution.
I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, where there's a conference that deals with human rights and artificial intelligence. And I'm really happy that we focus on this exclusively today and also tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.
And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to, you know, making sure that there is representation in governance from marginalized communities, for example. But what I still think is missing is a deeper connection between those people who speak AI, if you will, and those people who speak human rights. Because the worlds of policy and politics and the worlds of artificial intelligence, and within those the people who care about human rights, still tend to speak in parallel universes. And so what I'll try to do in my closing keynote today is to bring people's minds to a concrete, positive political agenda for change in thinking about how we can frame human rights for a broader audience, making sure that we use the tools that are there, the laws that apply both internationally and nationally, and doubling down on enforcement. Because so often the seeds for meaningful change are already in the laws, but they're not being enforced in a way that holds anyone to account.
And so we have a lot of work ahead of us. But I think the conference was a good start. And I'll be curious to see the different tone and the focus on geopolitics as I go to the Munich Security Conference with lots of the GZERO team as well.
- Siddhartha Mukherjee: CRISPR, AI, and cloning could transform the human race ›
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- New AI toys spark privacy concerns for kids ›
- Emotional AI: More harm than good? ›
- Singapore sets an example on AI governance ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Making rules for AI … before it’s too late ›
Can we control AI before it controls us?
COVID has accelerated our embrace of the digital world. The thing is, we don't always know who’s running it.
Instead of governments, Ian Bremmer says, so far a handful of Big Tech companies are writing the rules of digital space — through computer algorithms powered by artificial intelligence.
The problem is that tech companies have set something in motion that they don't fully understand or control.
But China does. Beijing is using AI to do some pretty bad stuff, such as surveillance of Uyghurs in Xinjiang (and also some fun stuff, like publicly shaming jaywalkers).
Will we learn to control AI before AI controls us? Find out on GZERO World.
- Kai-fu Lee: What's next for artificial intelligence? - GZERO Media ›
- Artificial intelligence from Ancient Greece to 2021 ›
- Democrats and Republicans unite! At least against China. - GZERO ... ›
- Ian Bremmer: How AI may destroy democracy - GZERO Media ›
- AI at the tipping point: danger to information, promise for creativity - GZERO Media ›
- The AI arms race begins: Scott Galloway’s optimism & warnings - GZERO Media ›
- Ian Explains: The dark side of AI - GZERO Media ›
- Tech innovation can outpace cyber threats, says Microsoft's Brad Smith - GZERO Media ›
- Artificial intelligence and the importance of civics - GZERO Media ›
Can China limit kids’ video game time? Risks with facial recognition
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
China is to ban kids from playing video games for more than three hours a week. But why and how?
Well, controlling the time that kids spend online fits a pattern of growing paternalism from a state that wants to control its population in every possible way. This time around, the gaming industry is made responsible for enforcing the time limits in China, which prescribe a strict gaming diet: one hour per day on Friday, Saturday, or Sunday. And of course, children are vulnerable. Protecting them from addictive and violent activities can be a very wise choice that parents want to make. There are also laws in a number of countries that limit advertisements targeting children, for example. But whether the latest restrictions on gaming in China will work, or will instead inspire a young generation to learn clever circumvention, remains to be seen.
With US agencies looking to expand the use of facial recognition tech, what are the security concerns?
Well, I see moral, legal, and operational concerns around the mass deployment of facial recognition systems. For one, they do not always work accurately, and even when they do, privacy rights are at stake for everyone. With false positives, innocent people end up being targets. And there's a real risk of mission creep. Black men specifically, and other minorities more generally, end up being particularly vulnerable to the bias, misidentification, and abuse of facial recognition systems. So from the point of view of their safety, these systems should not be used.
What are the concerns with facial recognition technology?
Nicholas Thompson, editor-in-chief of WIRED, discusses technology industry news today:
What are the concerns with facial recognition and will IBM's decision to no longer offer the tech mark the end of its use?
The concern is that the technology is racially biased: it's better at picking out white faces than Black faces. Another concern is that it could be abused by authorities like the police, who have a lot of power and could immediately identify who everyone is. Will IBM dropping out end its use? No. IBM was kind of far behind on this technology.
Has the technological decoupling between the US and China accelerated since COVID-19?
A little bit. But what's really happened is the political decoupling has gotten much worse. And over time, the political break will lead to a greater technological break.
As our work-from-home situations continue, has COVID-19 changed the workplace forever, even after the pandemic?
Yes, it will absolutely change the way we work in the near future. We'll have lines marking how close we can get to our colleagues. And in the long run, I think many more people are going to work from home and the whole nature of what an office means will change.
Amazon's Facial Recognition Problem: Tech in 60 Seconds
Should Amazon stop selling its facial recognition technology to law enforcement?
Probably. There's a big problem with its facial recognition technology: it has a harder time identifying people of color and women. It should surely solve that problem before it sells the technology to law enforcement, or else it's going to get into a lot of trouble.
Will Snap's move into games be a success?
Hard to say. So the problem with Snapchat is that they make really good products. They see the future, but they have a hard time building stuff that Facebook and Instagram can't copy. So, my guess? This is probably going to be a good product. Will they be able to make money off of it? We'll see.
Should I turn off Bluetooth when I'm not using it? Why?
Yes! Turn it off. Bluetooth is very susceptible to hackers. So there is a real risk. Toggle the little switch. Turn it off.
Is Instagram influence just as good as cash?
No. If you run a hotel or a restaurant and somebody with two thousand Instagram followers shows up and says, "I'm an influencer, give me a free meal!" say no.
And go deeper on topics like cybersecurity and artificial intelligence at Microsoft Today in Technology.
Tech in 60 Seconds: 10-Year Challenge
The 10-year challenge might actually be an attempt to improve facial recognition technology.
It's Tech in 60 Seconds with Nicholas Thompson!
And go deeper on topics like cybersecurity and artificial intelligence at Microsoft Today in Technology.