Is Silicon Valley eroding democracy? A Q&A with Marietje Schaake
Marietje Schaake has watched Silicon Valley for years, and she has noticed something troubling: The US technology industry and its largest companies have gradually displaced democratic governments as the most powerful forces in people’s lives. In her newly released book, “The Tech Coup: How to Save Democracy from Silicon Valley,” Schaake makes her case for how we got into this mess and how we can get ourselves out.
We spoke to Schaake, a former member of the European Parliament who serves as international policy director at the Stanford University Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She is also a host of the GZERO AI video series. This interview has been edited for clarity and length.
GZERO: How do private companies govern our lives in ways that governments used to — and still should?
Schaake: Tech companies decide on civil liberties and government decision-making in health care and border controls. There are a growing number of key decisions made by private companies that used to be made by public institutions with a democratic mandate and independent oversight. For-profit incentives do not align with those.
When tech companies curate our information environments for maximum engagement or ad sales, different principles take priority compared to when trust and verification of claims made about health or elections take precedence. Similarly, cybersecurity companies have enormous discretion in sharing which attacks they observe and prevent on their networks. Transparency in the public interest may mean communicating about incidents sooner and less favorably to the companies involved.
In both cases, governance decisions are made outside of the mandate and accountability of democratic institutions, while the impact on the public interest is significant.
Why do you present this not merely as a new group of powerful companies that have become increasingly important in our lives, but, as you write, as an “erosion of democracy”?
The more power in corporate hands that is not subject to the needed countervailing powers, the fewer insights and agency governments have to govern the digital layer of our lives in the public interest.
Why do you think technology companies have largely gone unregulated for decades?
Democrats and Republicans have consistently chosen a hands-off approach to regulating tech companies, as they believed that would lead to the best outcomes. We now see how naively idealistic and narrowly economically driven that approach was.
Silicon Valley is constantly lobbying against regulation, often saying that rules and bureaucracy would hold industry back and prevent crucial innovation. Is there any truth to that, or is it all talk?
Regulation is a process that can have endless different outcomes, so without context, it is an empty but very powerful phrase. We know plenty of examples where regulation has sparked innovation — think of electric cars as a result of sustainability goals. On the other hand, innovation is simply not the only consideration for lawmakers. There are other values in society that are equally important, such as the protection of fundamental rights or of national security. That means innovation may have to suffer a little bit in the interest of the common good.
What’s Europe’s relationship like with Silicon Valley at this moment after a series of first-mover tech regulations?
Many tech companies are reluctantly complying, after exhausting their lobbying efforts against the latest regulations with unprecedented budgets.
In both the run-up to the General Data Protection Regulation and the AI Act, tech companies lobbied against the laws but ultimately complied or will do so in the future.
What’s different about this moment in AI where, despite Europe’s quick movement to pass the AI Act, there are still few rules around the globe for artificial intelligence companies? Does it feel different than conversations around other powerful technologies you discuss in the book, such as social media and cryptocurrency?
I have never seen governments step up as quickly, and around the world, as they have in relation to AI, and in particular to its risks. Part of that may be a backlash against the late regulation of social media companies, but it is significant and incomparable to the response to any other wave of technological breakthroughs. The challenge will be for democratic countries to work together rather than to magnify the differences between them.
You were at the UN General Assembly in New York last week, where there was a new Pact for the Future and HLAB-AI report addressing artificial intelligence governance at the international level. Does the international community seem to understand the urgency of getting AI regulation and governance right?
The sense of urgency is great, but the sense of direction is not clear. Moreover, the EU and the US really do not want to see any global governance of AI even if that is where the UN adds most value. The EU and US prefer maximum discretion and presumably worry they would have to compromise when cooperating with partners around the world. The US has continued its typical hands-off approach to tech governance in relation to AI as well.
There is also a great need to ensure the specific needs of communities in the Global South are met. So a global effort to work together to govern AI is certainly needed.
Back to the book! What can readers expect when they pick up a copy of “The Tech Coup”?
Readers will look at the role of tech companies through the lens of power and understand the harms to democracy if governance is not innovated and improved. They will hopefully feel the sense of urgency to address the power grab by tech companies and feel hopeful that there are solutions to rebalance the relationship between public and private interests.
Can we actually save democracy from Silicon Valley — or is it too late?
The irony is that because so little has been done to regulate tech companies, there is a series of common-sense steps that can be taken right away to ensure governments remain just as accountable when they use technology for governance tasks, and that outsourcing cannot be used to undermine accountability. Governments can also use a combination of regulatory, procurement, and investment steps to ensure tech companies are more transparent, act in the public interest, and are ultimately accountable. This applies to anything from digital infrastructure and its security to election technologies and AI tools.
We need to treat tech the way we treat medicine: as something that can be of great value as long as it is used deliberately.
AI and war: Governments must widen safety dialogue to include military use
There's not a week without an announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the democratic governments of this world. They're all wrestling with how to regulate AI, and they seem to settle, without much variation, on a focus on safety.
Last week we saw the Department of Homeland Security in the US joining this line of efforts with its own advisory body, with lots of industry representatives and some from academia and civil society, to look at the safety of AI in its own context. And what's remarkable amid all this focus on safety is how little emphasis, or even attention, there is on restricting or putting guardrails around the use of AI by militaries.
And that is remarkable because we can already see the harms of overreliance on AI, even as industry pushes this as its latest opportunity. Just look at the venture capital poured into defense tech, or “DefTech” as it's popularly called. And so, I think we should push for a widening of the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real. It's about life-and-death situations. Just imagine somebody being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that the safety of AI at least includes a focus and discussion on what is proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that there are rules in place, initiated by democratic countries, to make sure that the rules-based order, international law, and international human rights and humanitarian law are upheld even in the context of the latest technologies like AI.
- Russia-Ukraine war: How we got here ›
- Robots are coming to a battlefield near you ›
- AI explosion, elections, and wars: What to expect in 2024 ›
- Biden & Xi set to agree on regulating military use of AI ›
- Ukraine’s AI battlefield ›
- Will AI further divide us or help build meaningful connections? - GZERO Media ›
- How neurotech could enhance our brains using AI - GZERO Media ›
AI policy formation must include voices from the global South
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.
This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the big leading AI policy blocs like the EU, the UK, the US and Japan that all have AI offices for oversight.
But what I really want to focus on is the role of people in the Global South, and how they're underrepresented in discussions about both what AI means in their local context and how they participate in debates around policy, if they do at all. Because right now, our focus is far too much on the three big policy blocs: China, the US, and Europe.
Also because, of course, a lot of industry is here around the corner in Silicon Valley. But I've learned so much from listening to people who focus on the African continent, where there are no fewer than 2,000 languages. There are many questions about what AI will mean for those languages and for people's access, beyond just the exploitative and extractive model under which large language models are trained with cheap labor from people in these developing countries, but also about how the harms can be so different.
For example, disinformation there tends to spread via WhatsApp rather than on social media platforms, and voice generated with AI, synthetic voice, is one of the most effective ways to spread it. That is not as prominently recognized here, where there's so much focus on text content and deepfake videos, but not so much on audio. And then, of course, we talked about elections, because there is a record number of people voting this year, and disinformation around elections tends to pick up.
And AI is really a wild card in that. So my takeaway is that we need many more conversations, not so much about AI in the Global South and tech policy there, but listening to the people who live in those communities, who research the impact of AI in the Global South, or who are pushing for fair treatment when their governments use the latest technologies for repression, for example.
So lots of fruitful thought. And I was very grateful that people made it all the way over here to share their perspectives with us.
OpenAI is risk-testing Voice Engine, but the risks are clear
About a year ago, I was part of a small meeting where I was asked to read a paragraph, sort of random text to me, it seemed. But before I knew it, I heard my own voice very convincingly, saying things through the speakers of the conference room that I had never said and would never say.
And it was really, you know, a sort of goosebump moment, because I realized that generative AI used for voice was already very convincing. And that was a prototype of Voice Engine, which the New York Times now reports is a new OpenAI product that the company is choosing to release only to a limited set of users while it is still testing for risky uses.
And I don't think this testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents, for example, with voice messages sounding like their children who are in trouble and asking for the parent to send money, which then, of course, benefits the criminal group, not their children.
So the risks of using voice impersonation are clear. Of course, companies will also point to opportunities, such as helping people who may have lost their voice through illness or disability, which I think is an important opportunity to explore. But we cannot be naive about the risks. And so, in response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI cannot be used for these. So there is some kind of restriction. But all in all, we need to see more independent assessment of these new technologies and a level playing field for all companies, not just those who choose to pace the release of their new models but also those who want to race ahead. Because sooner or later, one company or another will, and we will all potentially be confronted with widely accessible voice-generating artificial intelligence.
So it is a tricky moment, when the race to bring products to market and the rapid development of these technologies, which also carry a lot of risk and harm, are an ongoing dynamic in the AI space. And so I hope that, as discussions around regulation and guardrails happen around the world, the full spectrum of use cases that we know of and can anticipate will be on the table, with the aim of keeping people free from crime and our democracies safe, while making sure that if there is a benefit for people in minority or disabled communities, they can benefit from this technology as well.
Should we regulate generative AI with open or closed models?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by Princeton's Institute for Advanced Study, where the discussion centered on whether generative AI is better governed through models open to the public or closed models controlled by a select few, in this episode she shares insights into the potential workings, effectiveness, and drawbacks of each approach.
We just finished a half week workshop that dealt with the billion-dollar question of how to best regulate generative AI. And often this discussion tends to get quite tribal between those who say, “Well, open models are the best route to safety because they foster transparency and learning for a larger community, which also means scrutiny for things that might go wrong,” or those that say, “No, actually closed and proprietary models that can be scrutinized by a handful of companies that are able to produce them are safer because then malign actors may not get their hands on the most advanced technology.”
And one of the key takeaways that I have from this workshop, which was kindly hosted by Princeton's Institute for Advanced Study, is actually that the question of open versus closed models, but also the question of whether or not to regulate, is much more of a gradient. There is a big spectrum of considerations, from models that are all the way open, and what that means for safety and security, to models that are all the way closed, and what that means for opportunities for oversight, as well as the whole discussion about whether or not to regulate and what good regulation looks like. So, one discussion that we had was, for example, how we can assess the most advanced or frontier models in a research phase with independent, government-mandated oversight, and then decide more deliberately when these new models are safe enough to be put out into the market, or into the wild.
That way, there would be much less of the cutthroat market dynamics that lead companies to push out their latest models out of concern that a competitor might be faster, and there would be oversight built in that really considers, first and foremost, what is important for society and for the most vulnerable, for anything from national security to election integrity to, for example, nondiscrimination principles, which are already under enormous pressure thanks to AI.
So, a lot of great takeaways to continue working on. We will hopefully publish something that I can share soon, but these were my takeaways from an intense two and a half days of AI discussions.
Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist group, of course, had few, if any, people of color in it historically. And that is, unfortunately, still the case as the movement continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, with her new book out called “Unmasking AI” and her previous research “Coded Bias,” which well established how models by the largest and most popular companies are still so flawed, with harmful and illegal consequences.
So it begs the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while the product was put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. And indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market and what the consequences should be when people are discriminated against. Not just when an embarrassing flaw in the model is revealed, but because this can have real-world consequences: misleading notions of history, or treating people in ways that violate protections from discrimination.
So, even if there was a lot of outcry and sometimes even sort of entertainment about how poor this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
AI & human rights: Bridging a huge divide
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, where there's a conference that deals with human rights and artificial intelligence. And I'm really happy that we focus uniquely on this today and tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.
And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to, you know, making sure that there is representation in governance from marginalized communities, for example. But what I still think is missing is a deeper connection between those people who speak AI, if you will, and those people who speak human rights. Because still, the worlds of policy and politics and the worlds of artificial intelligence, and within those the people who care about human rights, tend to speak in parallel universes. And so what I'll try to do in my closing keynote today is to bring people's minds to a concrete, positive political agenda for change: thinking about how we can frame human rights for a broader audience, making sure that we use the tools that are there, the laws that apply, both international and national, and doubling down on enforcement. Because so often the seeds for meaningful change are already in the laws, but they are not enforced in a way that holds anyone to account.
And so we have a lot of work ahead of us. But I think the conference was a good start. And I'll be curious to see the different tone and the focus on geopolitics as I go to the Munich Security Conference with lots of the GZERO team as well.
- Siddhartha Mukherjee: CRISPR, AI, and cloning could transform the human race ›
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- New AI toys spark privacy concerns for kids ›
- Emotional AI: More harm than good? ›
- Singapore sets an example on AI governance ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Making rules for AI … before it’s too late ›
Are leaders asking the right questions about AI?
The official theme of the 2024 World Economic Forum held recently in Davos, Switzerland, was “Rebuilding Trust” in an increasingly fragmented world. But unofficially, the hottest topic on the icy slopes was artificial intelligence.
Hundreds of private sector companies convened to pitch new products and business solutions powered by AI, and nearly two dozen panel discussions featured “AI” in their titles. There was even an “AI House” on the main promenade, just blocks from the Congress Center, where world leaders and CEOs gathered.
So, there were many conversations about the rapidly evolving technology. But were they the right ones?
GZERO’s Tony Maciulis spoke to Marietje Schaake, a former member of the European Parliament who now leads an AI policy program at Stanford. Their conversation focused on the human side of AI and what it could mean for jobs and the workforce.
A recent study from the International Monetary Fund (IMF) found that as many as 40% of jobs worldwide could be affected by AI. Schaake said that kind of upheaval could lead to political unrest and a further rise in populism, and she encouraged corporate and public sector leaders alike to find solutions now, before the inequality gap widens further.