Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. And of all political ideologies, this white supremacist group, of course, had few, if any, people of color in it historically. And that's the same, unfortunately, for the movement as it continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” and previous research “Coded Bias” have well established how models by the largest and most popular companies are still so flawed, with harmful and illegal consequences.
So, it begs the question, how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while this product has been put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. And indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves, you know, when are these products ready to go to market and what should be the consequences when people are discriminated against? Not just because there is a revelation of an embarrassing flaw in the model, but because this could have real-world consequences: misleading notions of history, or treating people in ways that run against protections from discrimination.
So, even if there was a lot of outcry and sometimes even sort of entertainment about how poor this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
Governments sniff around Microsoft’s OpenAI deal
The PC giant says that $13 billion hasn’t bought it functional control over the ChatGPT parent company because OpenAI is technically run as a nonprofit. Instead of receiving equity in the company, Microsoft gets about half of OpenAI’s revenue until its investment is repaid.
But the power of the board has been in the spotlight in recent weeks. OpenAI’s nonprofit board fired Sam Altman, CEO of the for-profit arm of the business, but that decision was reversed after a pressure campaign from Altman, employees, and Microsoft. After days of public turmoil, Altman was reinstated, board members resigned, and Microsoft — which never had a seat on the board — gained a non-voting observer seat. If anything, Microsoft emerged from the ordeal with more power.
Some experts say the FTC has authority here, even though Microsoft is simply invested in OpenAI and didn’t buy it outright. “The Clayton Antitrust Act and the FTC Act – the laws that the FTC can enforce – aren’t limited to scrutinizing outright mergers,” says Mitch Stoltz, the antitrust and competition lead for the Electronic Frontier Foundation. “They also cover acquisitions of any amount of capital in a competitor if the effect is ‘substantially to lessen competition, or tend to create a monopoly.’”
Microsoft’s partial ownership could “soften” competition between the two firms, according to Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. “That includes influencing decision-making through voting rights on either board or by sharing sensitive information between the two firms,” she says. “I think the bigger picture here is that AI is viewed as an important technology and, therefore, competitive markets for AI are vital.”
Britain’s antitrust regulator has power here too. “If US companies do business in another country, then the antitrust and competition laws of that country apply,” says Moss, formerly the president of the American Antitrust Institute.
Antitrust is having a moment as a tool for wrangling Big Tech. In recent years, regulators have strived to block and undo anti-competitive mergers – they succeeded in the case of Meta’s purchase of Giphy and failed in the case of Microsoft’s purchase of Activision. But they’ve also sued tech firms over alleged abuses of monopoly power: Federal prosecutors are litigating ongoing antitrust cases against Amazon, Google, and Meta after years of treating these companies with kid gloves.
These antitrust probes are still preliminary, but they represent the first real AI-related legal challenges at a time when there’s a global appetite to stop big tech companies from getting unfair advantages in emerging markets. Next steps, should regulators decide to move forward, would be formal investigations.
Trudeau vs. Big Tech, round three
The stated aim of the process, which applies to services with more than $10 million in annual revenue in Canada, is to ensure that streamers “make meaningful contributions to Canadian and Indigenous content.” This may mean imposing obligations on them similar to those that traditional broadcasters meet as a part of the cost of doing business in Canada.
But registration is just the first step, and it is not clear what the final regulations will require, leading critics of the Trudeau government to worry that Big Brother will soon be deciding what content Canadians can consume online. Elon Musk and Glenn Greenwald said on X that it was part of an effort to crush free speech in Canada, and Conservative Leader Pierre Poilievre denounced it as “power-hungry woke bureaucracy.”
Heritage Minister Pascale St-Onge says that’s all nonsense, portraying it as an updated version of Canadian content rules, which have long been used in Canada to nurture a domestic cultural industry. That is likely true, but critics are concerned that the process could eventually open up podcast distributors, for example, to content controls. Until the regulator takes its next steps, months from now, it will likely be hard to rule out those concerns.
The Trudeau government is engaged in struggles with the tech titans on several fronts as it attempts to impose European-style regulatory order on unregulated cyberspace, while critics, both domestic and foreign, offer fierce resistance.
Cultural policy is traditionally treated as a carve-out in the Canada-US trade relationship, so the Biden administration is unlikely to pressure Canada over this. Still, it is one more irritant between the Trudeau government and rich and powerful American tech companies. Members of Congress, meanwhile, recently warned Canada that it will face consequences if it proceeds with plans to impose a 3% digital services tax on the Canadian revenue of tech giants. Canada has not backed down and plans to levy the tax beginning Jan. 1, 2024.
What We’re Watching: Fiery rhetoric and a Ukraine “peace plan,” Israel’s economy v. judicial reforms, SCOTUS social media cases
Dueling speeches on Ukraine
A lot of players (and potential players) in the war on Ukraine have used the looming one-year anniversary of the invasion to position themselves for the months ahead. On Monday, President Vladimir Putin used his annual state of the nation address to insist that Russia would continue to fight a war he blames on Western aggression, and he announced that Russia would suspend participation in the New START nuclear arms control treaty, which binds Russia and the United States to limit their strategic nuclear stockpiles and to share information and access to weapons facilities. (Note: Inspections have already been suspended for more than a year, and Russia is in no position to finance a new arms race.) President Joe Biden, meanwhile, followed up his surprise visit with Volodymyr Zelensky in Kyiv by meeting in Warsaw with Polish President Andrzej Duda and asserting during a speech that “Appetites of the autocrat cannot be appeased. They must be opposed. Autocrats only understand one word: no, no, no.” In listing what he called Russia’s “atrocities,” he said its forces have “targeted civilians with death and destruction; used rape as a weapon of war… stolen Ukrainian children in an attempt to steal Ukraine's future, bombed train stations, maternity hospitals, schools and orphanages.” Chinese President Xi Jinping is expected to make news on Friday with a speech of his own in which he’ll lay out the specifics of a peace plan which, given the distance between the Russian and Ukrainian positions, has virtually no chance of success. The war grinds on.
Israel’s shekel drops amid judicial shakeup
A day after the Knesset, Israel’s parliament, passed the first stage of a bill reforming the judicial system, Israel’s currency, the shekel, dropped 2% against the greenback – its lowest value against the US dollar since 2020. Making matters worse, the currency’s depreciation comes as the country is already grappling with sky-high inflation, with the central bank recently raising interest rates for the eighth time in less than a year. For weeks, Israeli bankers and business leaders have warned that the Netanyahu government’s proposed changes to the judiciary, which include stripping the High Court of its power to override government legislation, would make the country less attractive for foreign direct investment. Indeed, HSBC – the world’s fourth largest bank – recently sent a letter to investors saying that the proposed reforms would harm both foreign investment and capital markets in Israel. This comes as a new poll found that 17% of Israelis are thinking about taking their savings out of Israel. Netanyahu and his right-wing cabinet say they aren’t backing down, but will that change if Israel’s economy continues to suffer and protesters continue to shout?
SCOTUS appears hesitant to crack down on social platforms
On Tuesday, the US Supreme Court began considering whether social platforms can be held responsible for harmful content promoted by their algorithms in Gonzalez v. Google, one of two cases the justices are hearing this week that may affect how social media platforms moderate content. But the justices made clear that they are unlikely to issue a sweeping decision limiting protections for YouTube, a Google subsidiary, any time soon, suggesting that drawing the line on regulation is a thorny task better left to Congress. Some quick background: This case was brought by the family of Nohemi Gonzalez, a 23-year-old exchange student killed in the November 2015 ISIS attacks in Paris that also targeted the Bataclan theater. They argue that YouTube used data it collected on its users to push ISIS-related content to interested parties. At the crux of the legal battle is whether algorithms, which affect almost every online interaction, are legally protected under Section 230, a 1996 provision that says interactive service providers are not legally considered publishers of information posted by users on their sites. Both Republicans and Democrats have criticized the provision for different reasons, but efforts to revise it have stalled in Congress. Google, for its part, argues that it is legally absolved of responsibility for content promoted on its platforms because it is not a publisher. The debate continues Wednesday, when the Supreme Court will hear another case, Twitter v. Taamneh, looking at whether social platforms can be held liable for aiding and abetting acts of international terrorism.
The path to holding social media companies accountable
Facebook whistleblower Frances Haugen thinks governments need to rethink how they regulate social media companies to hold them accountable for the consequences of their actions.
Instead of laws banning specific practices, which lawyers are very good at skirting, governments should develop legislation that opens conversations about potential problems.
"That's an ongoing, flexible approach to trying to direct them back towards the common good," she tells Ian Bremmer on GZERO World.
Also, Haugen says we must recognize that the gap between fast-changing tech and slow-moving governments will continue to widen. To narrow it, we'll need more whistleblowers — and better laws to protect them.
Watch the GZERO World episode: Why social media is broken & how to fix it
What happens in Europe doesn’t stay in Europe — why EU social media regulation matters to you
The EU just approved the Digital Services Act, which for the first time will mandate social media companies come clean about what they do with our data.
Okay, but perhaps you don't live there. Why should you care?
First, transparency matters, says Facebook whistleblower Frances Haugen.
Second, she tells Ian Bremmer on GZERO World, the EU is not telling social media firms exactly how to change their ways — but rather saying: "We want a different relationship. We want you to disclose risks. We want you to just actually give access to data."
And third, Haugen believes that if it works in Europe, the DSA will help shape law in other parts of the world too.
Watch the GZERO World episode: Why social media is broken & how to fix it
Limiting Putin's propaganda: Big tech & the Russia-Ukraine war
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses the Ukraine conflict from the cybersecurity perspective:
If you're like me, you've been glued to the news all week after Russia invaded Ukraine to understand what is happening on the ground and how the democratic community is responding. We've seen tectonic changes already in this past week, and we could say the same for Big Tech.
How is the Russia-Ukraine war testing the role of Big Tech?
Well, I do think we see their outsized power revealed once more. We saw Putin restricting access to platforms like Facebook as he loses his grip on the propaganda narrative. But we also saw social media companies finally being forced to stop amplifying Russian state propaganda channels in the EU, due to new sanctions. The fact that the platforms are not doing the same in the US and other jurisdictions says a lot about their reluctance. And there's also a problem with executing their own corporate policies: new research shows that Facebook fails to correctly label Russian state-sponsored content in 91% of cases. It's very messy.
What is social media's role in the Russia-Ukraine war?
Well, we've seen a lot of clips, and I've been quite impressed with how the Ukrainian side has seemed to be one step ahead each time. President Zelensky addressed the world in response to rumors that he had fled the country, to show that none of that was true. And there are also clips of captured Russian soldiers, often looking like teenagers, being fed and calling their mothers in tears, which paints a picture of young boys sent into the battlefield without a clue of what they were sent to do.
But having said all this, it's only been a week of this unjust war and a lot will still have to be researched more deeply. So we will keep you posted.
US pushes back on EU's proposed laws impacting US tech companies
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
What are the EU's digital gatekeeper rules, and why does the US want them changed?
Now, the EU is working on a series of legislative proposals, for example, to ensure risk mitigation around the use of AI or the protection of fundamental rights, but also to make sure that there is fairness and competition in the digital economy. And the Digital Markets Act, still under negotiation between the European Commission, member state governments, and the European Parliament, seeks to impose proactive obligations on large gatekeeper tech companies, basically extending antitrust principles to protect smaller players.
And now, at the eleventh hour, the Biden administration, through Commerce Secretary Gina Raimondo, along with a number of senators, is voicing its concern. These political leaders worry that the EU rules would discriminate unfairly against American tech companies and really single them out. But what's easily overlooked in their statements is that US-based tech companies have grown exceptionally large, and a law that puts specific obligations on the largest companies would inevitably include many American companies.
So you might think of the situation we find ourselves in as the consequence of their success. But besides that, concerns about the outsized power of a handful of monopolists are not unique to Europe. Americans also worry about harms caused by a lack of competition, or harms to society, for example to democracy or the protection of minorities. So it may be better for US political leaders to prioritize focusing on those voices, instead of writing papers to Europeans who have nearly finished their years-long deliberations.