
Global researchers sign new pact to make AI a “global public good”

James Manyika, SVP of Research, Technology and Society at Google, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023.

REUTERS/Carlos Barria
Scott Nover, Contributing Writer
https://x.com/ScottNover
https://www.linkedin.com/in/scottnover/

A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement — the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence — at the United Nations General Assembly in New York on Thursday, Sept. 26.

The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.

The Manhattan Declaration, which shares some signatories with the HLAB-AI group — including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson — is a 10-point decree seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation among diverse and inclusive perspectives, conduct transparent research and risk assessment into AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.

This is meant to foster AI as a “global public good,” as the signatories put it.

“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”

That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can — if harnessed in a way that respects human rights and privacy — help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.

While other AI treaties, agreements, and declarations — such as the UK’s Bletchley Declaration signed last year — include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”