AI in 2024: Will democracy be disrupted?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she shares her reflection on AI in 2023.
Hello, this is GZERO AI. My name is Marietje Schaake. It's the end of the year, and so it's the time for lists. As we see so many top fives, top threes, top tens of the key developments in AI, I thought I would just share a couple of reflections. Not list them, just look back on this year, which was remarkable in so many ways.
We saw a huge explosion of discussion around AI governance. Are companies the ones that can take on all this responsibility of assessing risk, or of deciding when to push new research onto the market? Or, as illustrated by the dramatic saga at OpenAI, are companies not in a good position to make all these decisions themselves and to design checks and balances in-house? Governments agree. I don't think they want to leave these decisions to the big companies, and so they are really stepping up across the board and across the globe. Only in the last days of this year did we see the political agreement around the EU AI Act, a landmark law that will really set a standard in the democratic world for governing AI in a binding fashion. But there were also a lot of voluntary codes of conduct, as we saw at the G7, statements that came out of the AI Safety Summit like the Bletchley Park Declaration, and the White House's executive order, to add to the many initiatives taken in an attempt to make sure that AI developments at least respect the laws that are on the books, if not make new ones where needed.
Now, what I thought was missing quite a bit, looking at the AI Safety Summit, for example, but also in my home country, the Netherlands, where elections were held in which AI did not feature at all in the political debate, is a better discussion, more informed and more anticipatory, about job displacement. I think it is potentially one of the most devastating and most disruptive developments, and yet we don't really hear much about it, short of reports by consulting firms that predict macroeconomic benefits over the long run. But if you look at the political fallout of job displacement and the need for resources, for example, to reskill and retrain people, there is a need for a much more public debate, and maybe even to start talking about the T-word, namely taxing AI companies.
What I also think is missing still, despite having had more reference to the Global South, is true engagement of people from all over the world, not just from the most advanced economies, but really, to have a global engagement with people to understand their lived experiences and needs with regard to the rollout of AI. Because even if people do not have agency over what AI decides about them, there will still be impact even if people are not even online yet. So I think it is incredibly important to have a more global, inclusive, and equal discussion with people from all over the world, and that will be something I'll be looking out for the year 2024.
And then last, and certainly not least, 2024 has been called the Year of Democracy. I hope we will say the same when we look back a year from now. There will be an unprecedented number of people going to the polls, and there are still a lot of question marks about how disruptive AI is going to be for public and political debate, with new means of manipulating and of sharing disinformation through synthetic media that is really, really hard to distinguish from authentic human expression. The combination of AI and elections, AI and democracy, deserves a lot more attention and will probably draw it in the year when billions of people take to the polls, 2024.
For now, let me wish you a happy holiday season with friends and few screens, I hope. And we will see each other again afresh in the new year. Happy New Year and happy holidays.
- AI explosion, elections, and wars: What to expect in 2024 ›
- The world of AI in 2024 ›
- ChatGPT and the 2024 US election ›
- How AI threatens elections ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa ›
- How to protect elections in the age of AI - GZERO Media ›
- When AI makes mistakes, who can be held responsible? - GZERO Media ›
- AI's potential to impact election is cause for concern - EU's Eva Maydell - GZERO Media ›
UK AI Safety Summit brings government leaders and AI experts together
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she takes you behind the scenes of the first-ever UK AI Safety Summit.
Last week, the AI Safety Summit took place, and I'm sure you've read all the headlines, but I thought it would be fun to also take you behind the scenes a little bit. I arrived early in the morning of the day the summit started, and everybody was made to go through security between 7 and 8 AM, so pretty early, while the program only started at 10:30. What that led to was a long reception over coffee where old friends and colleagues met, new people were introduced, and all the participants from business, government, civil society, and academia really started to mingle.
And maybe that was part of the success of the summit, which then started with a formal opening with remarkably global representation. There had been some discussion about whether it was appropriate to invite the Chinese government, but indeed a Chinese minister was there, as were ministers from India and Nigeria, to underline that the challenges governments have to deal with around artificial intelligence are global ones. And I think that was an important symbol the UK government sought to underline. Now, there was a little bit of surprise in the opening when Secretary Raimondo of the United States announced the US would also initiate an AI Safety Institute, right after the UK government had announced its own. It did make me wonder: why not just work together globally? But I guess they each want their own institute.
And those were perhaps the most concrete, tangible outcomes of the conference. Other than that, it was more a statement of intent to look further into the risks around AI safety. Ahead of the conference, there had been a lot of discussion about whether the UK government was taking too narrow a focus on AI safety, whether it had been leaning too much towards the effective-altruism, existential-risk camp. But in practice, the program gave a lot of room for discussions, and I thought that was really important, about the known and present-day risks that AI poses: to civil rights, when we think about discrimination, or to human rights, when we think about the threats to democracy, from disinformation that generative AI can put on steroids, but also the real question of how to govern AI at all when companies have so much power and there is such a lack of transparency. So civil society leaders who were worried that they were not sufficiently heard in the program will hopefully feel a little more reassured, because I spoke to a wide variety of civil society representatives who were a key part of the participants alongside government, business, and academic leaders.
So, when I talked to some of the first generation of thinkers and researchers in the field of AI, for them it was a significant moment, because they had never thought they would be part of a summit next to government leaders. For a long time they were mostly in their labs researching AI, and suddenly here they were, being listened to at the podium alongside government representatives. In a way, they were a little bit starstruck, and I thought that was funny, because it was probably the same the other way around, certainly for the Prime Minister, who really looked like a proud student when he was interviewing Elon Musk. And that was another surprising development: shortly after the press conference had taken place, a moment to shine in the media with the outcomes of the summit, Prime Minister Sunak decided to spend the airtime, and certainly the social media coverage, interviewing Elon Musk, who then predicted that AI would eradicate lots and lots of jobs. Remarkably, that was a topic that barely got mentioned at the summit itself, so maybe it was a good thing it became part of the discussion after all, albeit in an unusual way.
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- Elon Musk's geopolitical clout grows as he meets Modi ›
- Everybody wants to regulate AI ›
- Governing AI Before It’s Too Late ›
- Be very scared of AI + social media in politics ›
- Is AI's "intelligence" an illusion? ›
- The geopolitics of AI ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake - GZERO Media ›
- AI regulation means adapting old laws for new tech: Marietje Schaake - GZERO Media ›
- AI & human rights: Bridging a huge divide - GZERO Media ›
Rishi Sunak's first-ever UK AI Safety Summit: What to expect
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she previews what to expect from the UK's upcoming AI Summit.
This week takes us to the AI Safety Summit in the UK. This is a little preview, so I don't know yet what the results of the summit will be. But what we can say is that it's a prestige project for the Sunak government. It's been hastily put together. A month ago, the invitation wasn't even in my inbox yet, and the government is looking to see what it can deliver short of calling for regulation, because that is definitely something it wants to stay away from. We've seen speculation that it will come up with something like an IPCC for AI, modeled after the successful Intergovernmental Panel on Climate Change, to compare existing research and draw lessons, again with a narrow focus on the safety of AI.
I'll be looking specifically at how representative the attendance of the summit will be. In the past, we've seen governments leaning very heavily on bringing CEOs into meetings to talk about AI, but I think it's very important to have multi-disciplinary, multi-stakeholder groups of people thinking about the future of AI, civil society representatives, academics with different views, and other experts that can speak to the lived experience with AI in the here and now. Because frankly, it's not only catastrophic risk that people should be concerned about, but much more present and clear problems that AI causes now.
Discrimination and bias are well-known problems of AI systems, but they're also very harmful for the environment, and we don't talk about that enough. There are concerns about antitrust in AI context, and of course the ease with which disinformation and manipulation can be convincing through synthetic media and how that will harm democracy. I hope that there will be a focus on that as well.
We do know that for the UK, attracting investment and getting companies to settle in the United Kingdom, being welcoming to the economic development of AI, is important, especially after Brexit. The country has tried to tell the world it's open for business, even if its economy is not doing too well. And as such, it is going somewhat against the current of, for example, the EU, which is focusing on hard regulation: a comprehensive AI law in the EU AI Act. But even today, we saw an executive order on AI announced by the White House and a code of conduct presented by the G7.
So, it almost looks as if there are a lot of people who want to steal the UK's thunder, but it's too early to tell. The summit is still to take place, and of course, we will keep you posted on how that goes.
More from GZERO AI: https://www.gzeromedia.com/gzero-ai/
- A vision for inclusive AI governance ›
- The geopolitics of AI ›
- GZERO AI launches October 31st ›
- How AI will roil politics even if it creates more jobs ›
- Regulating AI: The urgent need for global safeguards ›
- Can data and AI save lives and make the world safer? ›
- What does the UK’s Sunak want from Biden? ›
- UK AI Safety Summit brings government leaders and AI experts together - GZERO Media ›
- Should AI content be protected as free speech? - GZERO Media ›
- Is the EU's landmark AI bill doomed? - GZERO Media ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake - GZERO Media ›
- AI regulation means adapting old laws for new tech: Marietje Schaake - GZERO Media ›
- Gemini AI controversy highlights AI racial bias challenge - GZERO Media ›