Analysis

Global researchers sign new pact to make AI a “global public good”

James Manyika, SVP of Research, Technology and Society at Google, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023.
REUTERS/Carlos Barria

A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.

The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.

The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point decree seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.

This is meant to foster AI as a “global public good,” as the signatories put it.

“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”

That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.

While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”


A French navy boat surrounds the GRINCH oil tanker, intercepted by France in the Alboran Sea on suspicion of operating under a false flag and belonging to Russia's shadow fleet that enables Russia to export oil despite sanctions, and diverted to the port of Marseille-Fos, in the Gulf of Fos-sur-Mer, near Martigues, France, on January 25, 2026.
REUTERS/Manon Cruz

$90 billion: The amount of revenue that Russia has reportedly made from smuggled crude oil exports, with 48 companies reportedly working together to disguise the origin of the oil and circumvent the sanctions imposed since the full-scale war on Ukraine began.

People in support of former South Korean President Yoon Suk Yeol rally near Seoul Central District Court in Seoul on Feb. 19, 2026. The court sentenced him to life imprisonment the same day for leading an insurrection with his short-lived declaration of martial law in December 2024.

Kyodo

65: The age of former South Korean President Yoon Suk Yeol, who was sentenced to life in prison on Thursday after being found guilty of leading an insurrection through his declaration of martial law in December 2024.