Global researchers sign new pact to make AI a “global public good”
A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.
The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.
The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point pledge seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.
This is meant to foster AI as a “global public good,” as the signatories put it.
“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”
That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.
While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”
Europe adopts first “binding” treaty on AI
The Council of Europe officially opened its new artificial intelligence treaty for signatories on Sept. 5. The Council is billing its treaty – called the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law – as the “first-ever international legally binding treaty” aimed at making sure AI systems are consistent with international legal standards.
The US, UK, Vatican, Israel, and the European Union have already signed the framework. While the Council of Europe is a separate body that predates the EU, its treaty comes months after the EU passed its AI Act. The treaty has some similarities with the AI Act, including a common definition of AI, but it is functionally different.
Mina Narayanan, a research analyst at Georgetown University’s Center for Security and Emerging Technology, expressed skepticism about the new treaty’s effectiveness. She said the treaty is “light on details and reiterates provisions that have already been discussed in international fora.” That said, she found the treaty’s attempts to give some legal recourse for harm done by AI systems — including mechanisms to lodge complaints and contest decisions made by AI — somewhat novel.
But Nick Reiners, a senior geo-technology analyst at Eurasia Group, said the treaty isn’t especially binding, despite how it’s billed, since it requires parties to opt in. That’s a measure, he noted, that the UK and US lobbied for as they wanted a “lighter-touch approach.” Further, he said that carveouts from the treaty water down how strenuous it is, particularly regarding AI use for national security purposes. That makes Israel’s willingness to participate unsurprising since the treaty wouldn’t cover how it’s deploying AI in the war in Gaza.
Reiners said that despite its lack of involvement in creating this treaty, the EU would like to use it to “internationalize the AI Act,” getting companies and governments outside the continent in line with its priorities on AI.
While the treaty isn’t groundbreaking, “it shows how the Western world, in a broader sense, is continuing to expand the international rules-based framework that underpins the respect of human rights and the rule of law,” he said, “and this framework now takes account of AI.”
Oh BTW, OpenAI got hacked and didn’t tell us
A hacker breached an OpenAI employee forum in 2023 and gained access to internal secrets, according to a New York Times report published Thursday. The company, which makes ChatGPT, told employees about the breach but never disclosed it publicly. Employees voiced concerns that OpenAI wasn’t taking enough precautions to safeguard sensitive data — and that if this hacker, a private individual, could breach its systems, then so could foreign adversaries like China.
Artificial intelligence companies have treasure troves of data — some more sensitive than others. They collect training data (the inputs on which models learn) and user data (how individuals interact with applications), but also have trade secrets that they want to keep away from hackers, rival companies, and foreign governments seeking their own competitive advantage.
The US is trying hard to restrict this valuable data, along with the chip technology that powers AI training, to friendly countries, and it has enacted export controls against China. If lax security at private companies means Beijing can simply pilfer the data it needs, Washington will need to modify its approach.
Hard Numbers: Unnatural gas needs, Google’s data centers, Homeland Security’s new board, Japan’s new LLM
8.5 billion: Rising energy usage from AI data centers could create up to 8.5 billion cubic feet per day of additional demand for natural gas, according to an investment bank estimate. Generative AI consumes enormous amounts of energy and water to power and cool expansive data centers, which climate advocates warn could exacerbate climate change.
32 billion: Google is pouring $3 billion into data center projects to power its AI systems. That budget includes $2 billion for a new data center in Fort Wayne, Ind., and $1 billion to expand three existing ones in Virginia. In earnings reports this week, Google, Meta, and Microsoft disclosed that they had spent a combined $32 billion on data centers and related capital expenditures in the first quarter alone.
22: The US Department of Homeland Security announced a new Artificial Intelligence Safety and Security Board with 22 members including the CEOs of Alphabet (Sundar Pichai), Anthropic (Dario Amodei), OpenAI (Sam Altman), Microsoft (Satya Nadella), and Nvidia (Jensen Huang). The goal: to advise Secretary Alejandro Mayorkas on “safe and secure development and deployment of AI technology in our nation’s critical infrastructure.”
960 million: SoftBank, the Japanese technology conglomerate, plans to pour $960 million into upgrading its computing facilities over the next two years in order to boost its AI capabilities. The company’s broad ambitions include funding and developing a “world-class” large language model geared specifically toward the Japanese language.

TikTok videos go silent amid deafening calls for safety guardrails
It's time for TikTokers to enter their miming era. Countless videos suddenly went silent as music from top stars like Drake and Taylor Swift disappeared from the popular app on Thursday. The culprit? Universal Music Group – the world’s largest record company – could not secure a new licensing deal with the powerful video-sharing platform.
In an open letter, UMG blamed TikTok for “trying to build a music-based business, without paying fair value for the music.” UMG claimed TikTok “responded first with indifference, and then with intimidation” after being pressed not only on artist royalties but also on restrictions for AI-generated content and stronger user safety measures.
It’s been a rough week for TikTok CEO Shou Zi Chew. He joined the CEOs of Meta, X, and Discord for a grilling on Capitol Hill this week over the dangers of abuse and exploitation that children face on their platforms. Sen. Lindsey Graham went so far as to say these companies have “blood on their hands.” The hearing followed last year’s public health advisory from the Surgeon General, which argued that social media presents “a risk of harm” to youth mental health and called for “urgent action” from these companies.
The big takeaway: It appears social media companies are quite agile when under pressure and can change the user experience for billions of people at the drop of a hat, especially when profit margins are involved. Imagine what these companies could do if they put that energy into the health of their users instead.

The Graphic Truth: UN personnel in peril
In just one month, the fighting in Gaza has claimed more UN aid workers' lives than any previous conflict. Since Oct. 7, at least 89 personnel from UNRWA, the major UN humanitarian aid agency in the region, have been killed. In total, 131 UN aid workers have died in the Gaza Strip in 2023. UN leaders are calling for an immediate ceasefire and expanded humanitarian access to Gaza, emphasizing the need to protect civilians and vital infrastructure and to ensure the safe and swift delivery of essential aid.
But Israel remains unswayed by their calls and by mounting international pressure for a ceasefire, saying hostages taken by Hamas militants should be released first.

Sen. Chris Coons on returning to offices in pandemic: OSHA is “AWOL”
In a blistering response to questions about federal workers being asked to return to offices as COVID cases climb around the US, Sen. Chris Coons (D-DE) said not enough prep work has been done to establish clear and consistent standards for safe workplaces. OSHA, the Occupational Safety and Health Administration, has been "AWOL" on the matter, Coons told Ian Bremmer. "They have refused to issue an emergency standard for the return to work, which they could, and which would give both employers and employees a standard that they can look to for guidance about when and how it's safe to return to work," he said in an interview for GZERO World.