Tech accord on AI & elections will help manage the ‘new reality,’ says Microsoft’s Brad Smith
At the Munich Security Conference, leading tech companies unveiled a new accord that committed them to combating AI-generated content that could disrupt elections.
During a Global Stage panel on the sidelines of this year’s conference, Microsoft Vice Chair and President Brad Smith said the accord would not completely solve the problem of deceptive AI content but would help “manage this new reality in a way that will make a difference and really serve all of the elections… between now and the end of the year.”
As Smith explains, the accord is designed to bring the tech industry together to preserve the “authenticity of content,” including via the creation of content credentials. The industry will also work to detect deepfakes and provide candidates with a mechanism to report them, says Smith, while also taking steps to “promote transparency and education.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
We’re Sora-ing, flying
OpenAI, the buzzy startup behind the ChatGPT chatbot, has begun previewing its next tool: Sora. Just like OpenAI’s DALL-E allows users to type out a text prompt and generate an image, Sora will give customers the same ability with video.
Want a cinematic clip of dinosaurs walking through Central Park? Sure. How about kangaroos hopping around Mars? Why not? These are the kinds of imaginative things that Sora can theoretically generate from just a short prompt. The software has so far been tested only by a select group of people, and the reviews are mixed: it's groundbreaking, but it often struggles with scale and produces glitchy results.
AI-generated images have already posed serious problems, including the spread of photorealistic deepfake pornography and convincing-but-fake political imagery. (For example, Florida Gov. Ron DeSantis’ presidential campaign used AI-generated images of former President Donald Trump hugging Anthony Fauci in a video, and the Republican National Committee did something similar with fake images of Joe Biden.)
While users may not yet have access to movie-quality video generators, they soon might — something that’ll almost certainly supercharge the issues presented by AI-generated images. The World Economic Forum recently named disinformation, especially that caused by artificial intelligence, as the biggest global short-term risk. “Misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years,” according to the WEF. “A growing distrust of information, as well as media and governments as sources, will deepen polarized views – a vicious cycle that could trigger civil unrest and possibly confrontation.”
Eurasia Group, GZERO’s parent company, also named “Ungoverned AI” as one of its Top Risks for 2024. “In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors — notably Russia — to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale,” according to the report. “A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than any other factor.”