Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. And importantly, should these companies be the ones designing and deciding what fair representation looks like?
This was a week full of AI-related stories. Once again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and how utterly it failed. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. And of all political ideologies, this white supremacist movement, of course, historically had few, if any, people of color in its ranks. And that remains true, unfortunately, as the movement continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book, “Unmasking AI,” and earlier work featured in “Coded Bias” have, you know, well established how models from the largest and most popular companies are still deeply flawed, with harmful and even illegal consequences.
So, it raises the question: How much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while the product was put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. And indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves, you know, when are these products ready to go to market, and what should the consequences be when people are discriminated against? Not just because there is a revelation of an embarrassing flaw in the model, but because, you know, this could have real-world consequences: misleading notions of history, treating people in ways that violate protections against discrimination.
So, even if there was a lot of outcry, and sometimes even amusement, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from what we saw from Google's Gemini this past week.
What country will win the AI race?
Art: Courtesy of Midjourney
Savvy startups, tech giants, and research labs woo the best engineers and financing to fuel technological breakthroughs. But the battle for AI supremacy is much bigger than the industry itself – it's a global contest, pitting nations against each other.
Many of the world’s most powerful governments are flexing their muscles to build a competitive edge by cultivating robust domestic AI sectors. Don’t be fooled into thinking that recent efforts to legislatively rein in AI models and the companies behind them are signs of governments hitting the brakes – it’s quite the opposite.
Why, you ask? Because it’s a boon for any country to attract top talent and spur economic activity, says Valerie Wirtschafter, a fellow at the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative. Hosting top AI companies also “inevitably catapults host countries to the forefront of conversations around standards and governance, both domestically and internationally.”
Beyond that, a thriving AI sector can do wonders for national security. That’s true not only for military and intelligence applications or research-and-development, but also for ensuring that standards of development “do not pose an inherent risk and are developed with a certain set of values in mind,” Wirtschafter says.
Since Google, Microsoft, and OpenAI call America home, Washington has the ultimate power play. It can better control these tech giants and set the vibe for worldwide AI regulation.
Such control sets governments an inch closer to technological sovereignty, says Nick Reiners, a senior analyst for geotechnology at Eurasia Group: “Having these companies in your country means you’re not dependent on another country.”
Governments can boost their AI sectors in numerous ways — through subsidies, research funding, infrastructure investment, and government contracts.
“Defense spending and government R&D has always been a big stimulus for civilian and commercial research and product development,” says Scott Wallsten, president and senior fellow at the Technology Policy Institute, a Washington-based think tank. “You can be sure the DOD is working on these tools for their own purposes because they’re in an arms race with potential adversaries.”
Who’s ahead? The US and China are way out in front. “While in the US, these advances have been primarily driven by the private sector, in China they have been shaped more by government support,” says Wirtschafter. But she notes that the US CHIPS Act is a sign that America is trying to boost its strategic advantage.
Stanford University’s annual AI Index report found the US and China leading in many different ways, including private investment and newly funded AI firms. (The UK, EU, Israel, India, and Canada also rank highly in many of the report’s metrics.)
While it's unlikely that any other country will catch up to the US and China, and the US holds the overall lead, Wirtschafter notes that China is particularly strong in facial recognition technology.
Could governments get possessive? Yep, this is a high-stakes game, and Washington and Beijing, among others, could increasingly opt for protectionist measures to keep powerful AI models in their grasp.
The US is already doing this with chips, the underlying technology for AI. Washington exerts strict export controls over advanced chips and chipmaking equipment, lest they end up in rival hands – meaning China. It has also blocked corporate takeovers that could shift the balance of power in chips, including a 2018 deal that would have put US chipmaker Qualcomm in a Singapore-based company's grasp. And a new report indicates the Biden administration forced a Saudi firm to divest from a US chipmaker linked to OpenAI CEO Sam Altman.
If the US and other governments determine that protecting powerful AI models is key to their national security, they could take similarly drastic measures to keep them domestic — or at least in the hands of allies. Just last week, Bloomberg reported that the London-based AI startup Stability AI, known for its Stable Diffusion image generator, is exploring a sale amid internal turmoil. The company reportedly reached out to two startups — the Canadian company Cohere and the US-based Jasper — to gauge their interest in a sale. There’s no indication yet that regulators are worried, but the potential corporate shakeup comes as British politicians have been desperately trying to make the UK a friendly place for AI firms.
The last thing the UK wants is to get burned again – like it did with DeepMind and Arm, two promising British tech companies that were acquired by US and Japanese firms in 2014 and 2016, respectively. In a recent interview with the BBC, Ian Hogarth, who is leading the UK's AI taskforce, spoke of the need to boost European technology companies instead of allowing them to be sold. “We've had some great tech companies and some of them got bought early, you know – Skype got bought by eBay, DeepMind got bought by Google,” Hogarth said. “I think really our ecosystem needs to rise to the next level of the challenge.”
British lawmakers passed the National Security and Investment Act in 2022, granting the government new national-security powers to intervene in the foreign acquisition of domestic companies. “The pace of change has been really significant since that period,” Wirtschafter said of the DeepMind acquisition, “and the desire to maintain a competitive national position in this space would be central to any potential sale.” The UK’s National AI Strategy, published in 2021, says that the government will “protect national security” and protect against “potentially hostile foreign investment.”
But ministers are now considering rolling back those new rules to appear more business-friendly. And that's the central tension all AI-hungry countries face: They need to appear AI-friendly while still regulating forcefully. Supremacy in AI is on the line.