Global researchers sign new pact to make AI a “global public good”
A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.
The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.
The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point pledge seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.
This is meant to foster AI as a “global public good,” as the signatories put it.
“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”
That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.
While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”
Europe adopts first “binding” treaty on AI
The Council of Europe officially opened its new artificial intelligence treaty for signature on Sept. 5. The Council is billing its treaty – called the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law – as the “first-ever international legally binding treaty” aimed at making sure AI systems are consistent with international legal standards.
The US, UK, Vatican, Israel, and the European Union have already signed the framework. While the Council of Europe is a separate body that predates the EU, its treaty comes months after the EU passed its AI Act. The treaty has some similarities with the AI Act, including a common definition of AI, but it is functionally different.
Mina Narayanan, a research analyst at Georgetown University’s Center for Security and Emerging Technology, expressed skepticism about the new treaty’s effectiveness. She said the treaty is “light on details and reiterates provisions that have already been discussed in international fora.” That said, she found the treaty’s attempts to give some legal recourse for harm done by AI systems — including mechanisms to lodge complaints and contest decisions made by AI — somewhat novel.
But Nick Reiners, a senior geo-technology analyst at Eurasia Group, said the treaty isn’t especially binding, despite how it’s billed, since it requires parties to opt in. That’s a measure, he noted, that the UK and US lobbied for as they wanted a “lighter-touch approach.” Further, he said that carveouts from the treaty water down how strenuous it is, particularly regarding AI use for national security purposes. That makes Israel’s willingness to participate unsurprising since the treaty wouldn’t cover how it’s deploying AI in the war in Gaza.
Reiners said that despite its lack of involvement in creating this treaty, the EU would like to use it to “internationalize the AI Act,” getting companies and governments outside the continent in line with its priorities on AI.
While the treaty isn’t groundbreaking, “it shows how the Western world, in a broader sense, is continuing to expand the international rules-based framework that underpins the respect of human rights and the rule of law,” he said, “and this framework now takes account of AI.”
The Feds vs. California: Inside the twin efforts to regulate AI in the US
Silicon Valley is home to the world’s most influential artificial intelligence companies. But there’s currently a split approach between the Golden State and Washington, DC, over how to regulate this emerging technology.
The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order commanded federal agencies and departments to begin writing rules and explore how they can incorporate AI to improve their current work. The administration also signed onto the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.
But perhaps its biggest win came on Aug. 29 when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Altman’s insistence that regulation should happen at the national level read as an implicit rebuke of California’s effort to regulate the company and its technology.
Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”
But Dean Ball, a research fellow at the libertarian think tank Mercatus Center, said he’s concerned about the opacity of these arrangements. “We do not know what level of access the federal government is being given, whether the federal government has the ability to request that model releases be delayed, and many other specific details,” Ball said. “This is not the way lawmaking is supposed to work in America; having private arrangements worked out between providers of transformative technology and the federal government is a troubling step in AI policy.”
Still, these appear to be relatively light-touch measures that stand in contrast to California’s proposed approach to regulating artificial intelligence.
On Aug. 28, the state’s legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aims to establish “common sense safety standards” for powerful AI models. Written by California State Sen. Scott Wiener, and supported by AI pioneers like Geoffrey Hinton and Yoshua Bengio, the bill has divided Silicon Valley companies. Albrecht said that what’s been proposed by California is much closer to the European model of AI regulation — the EU’s AI Act passed in March — while Washington hasn’t yet adopted a unified view on how the technology should be regulated.
Critics of the bill include OpenAI, California’s Chamber of Commerce, and even former Speaker of the House Nancy Pelosi. “While we want California to lead in AI in a way that protects consumers, data, intellectual property, and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said in a recent statement. In a recent edition of GZERO AI, experts from the Electronic Frontier Foundation and the Atlantic Council expressed concerns about the bill’s so-called “kill switch” and how it could stifle open-source AI development.
Some industry players have been more open to the bill. Anthropic said the bill’s benefits likely outweigh its risks, and Tesla CEO Elon Musk, who has an AI startup of his own called xAI, said California should “probably” pass the bill.
It’s still unclear whether Gov. Gavin Newsom will sign the bill — he has until Sept. 30 to do so. He has not signaled his view on the legislation, but in May, he warned about the risk of overregulating AI.
“I don’t want to cede this space to other states or other countries,” Newsom said at an event in San Francisco. “If we overregulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
Antitrust is coming for AI
The US government's two antitrust regulators struck a deal to divvy up major investigations into anti-competitive behavior in the AI industry. The Justice Department will look into Nvidia’s dominance over the chip market, while the Federal Trade Commission will investigate OpenAI and its lead investor, Microsoft.
In December, the FTC opened a preliminary inquiry into Microsoft's $13 billion stake in OpenAI, which makes ChatGPT. It’s a nontraditional deal, in which Microsoft receives half of OpenAI’s revenue until the investment is repaid, rather than traditional equity. But Microsoft also flexed its muscles after the sudden ouster of OpenAI CEO Sam Altman last year, offering to hire him and any defecting OpenAI employees, effectively pressuring the company to rehire him — which it did soon after. The UK’s Competition and Markets Authority also began probing the relationship between the two firms in December.
Meanwhile, Nvidia has become the undisputed leader of the AI chip industry, with its powerful graphics processors powering the training and operation of generative AI models. The company recently disclosed in a filing with the US Securities and Exchange Commission that its pole position and market dominance have attracted regulatory scrutiny from the United Kingdom, though it didn’t specify the nature of the inquiry.
Noah Daponte-Smith, a United States analyst for Eurasia Group, sees this announcement “largely as a messaging exercise intended to show that DOJ [and] FTC will be just as dogged on antitrust issues in the AI space as in the rest of the Big Tech arena.” He sees the decision as more of a continuation of Biden’s aggressive antitrust regime than a policy position on the regulation of AI.
“My sense is that AI regulation will have to occur more through Congress and through executive actions not focused on competition,” he added.
The UK is plotting to regulate AI
Policy officials in the Department for Science, Innovation and Technology have begun drafting legislation to rein in the most potent dangers from AI, sources told Bloomberg News this week. While Europe has set the standard by passing its comprehensive AI Act, UK Prime Minister Rishi Sunak has pledged to take a more hands-off approach to the technology. It’s unclear how far the forthcoming bill, which is still in its early stages, will go in setting up safeguards. Separately, the Department for Culture, Media and Sport has also proposed amending the country’s copyright law to allow companies to “opt out” of having their content scraped by generative AI firms.
Exclusive Poll: AI rules wanted, but can you trust the digital cops?
A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises going on, the most profound long-term transition underway right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claimed he needs to fabricate more semiconductor chips so urgently that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
Seven trillion dollars has a way of focusing the mind, even if the arrogance of floating such a number is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which will it be? And how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but issues of anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta! IG, On the ‘Gram, whatevs …). Technology, like music, is by nature generational.
But not all tech companies are equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but less than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. 75% of people agree that tech companies are “gaining excessive power,” according to the survey. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll conducted in the US in November 2023, in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US, the UN, and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with the pace of innovation in software engineering, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, in which industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.
Taylor Swift controversy sparks new porn bill
After nonconsensual deepfake porn of pop singer Taylor Swift bounced around the internet in recent weeks, US lawmakers have proposed a fix.
The Disrupt Explicit Forged Images and Non-Consensual Edits Act, introduced by Democratic Sen. Dick Durbin with Republican cosponsors, would give victims of this digital abuse the right to sue for damages from anyone who “knowingly produced or possessed the digital forgery with intent to disclose it.”
Swift has reportedly considered taking legal action in light of the new images. Microsoft, meanwhile, has taken steps in response to the incident to close loopholes in its software that allowed users to make such images. The bill has bipartisan support in the Senate, but with a legislative agenda crowded with bills on government funding, border security, and Ukraine aid, there’s no clear path to a swift passage.
Biden plays big brother for AI
President Joe Biden is preparing to issue new rules to compel technology companies to inform the government when they begin building powerful artificial intelligence models.
The rules are the result of a monthslong process that began with Biden’s executive order on AI in October. Under the rules, companies will have to disclose the computing power of their models (if it exceeds a certain number of FLOPs, a standard measure of computing operations), who owns the training data being fed to them, and how the developer is conducting safety testing.
Biden is using his authority under the Defense Production Act, a sweeping set of powers for the president that, he believes, gives him the authority to rein in the most powerful AI models that could pose a threat to safety or national security if not monitored closely.