February 29, 2024
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini controversy. Importantly, should these companies be the ones designing and deciding what fair representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's attempt to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, produce synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist movement, of course, historically included few, if any, people of color. And the same is unfortunately true of the movement as it continues to exist, albeit in smaller form, today.
And so there were lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” and earlier work featured in the documentary “Coded Bias” have established how models from the largest and most popular companies are still deeply flawed, with harmful and even illegal consequences.
So it raises the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong when the product had already been put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. Indeed, with so much power over these models, and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market and what the consequences should be when people are discriminated against. This is not just about the revelation of an embarrassing flaw in the model; it can have real-world consequences, from misleading notions of history to violations of legal protections against discrimination.
So even if there was a lot of outcry, and sometimes even a kind of entertainment, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from what we saw from Google's Gemini this past week.
