OK, Doomer

Tesla and SpaceX's CEO Elon Musk pauses during an in-conversation event with British Prime Minister Rishi Sunak in London, Britain, on Nov. 2, 2023.
Kirsty Wigglesworth/Pool via REUTERS

British PM Rishi Sunak hosted several world leaders, including UN Secretary-General António Guterres and US Vice President Kamala Harris, at last week's AI Safety Summit. But the biggest celebrity draw was his sit-down interview with billionaire Elon Musk, one of the world's richest men and the controlling force behind Tesla, SpaceX, and X, formerly known as Twitter.

Musk has long played it both ways on AI — he frequently warns of its “civilizational risks” while investing in the technology himself. Musk’s new AI company, xAI, notably released its first model to a group of testers this past weekend. (We don’t know much about xAI’s Grok yet, but Musk boasts that it has access to Twitter data and will “answer spicy questions that are rejected by most other AI systems.”)

Musk told Sunak he thinks AI will primarily be a “force for good” while at the same time warning that AI could be “the most destructive force in history.”

There's a central tension in tech regulation: between guarding against doomsday scenarios, like the development of an all-powerful AI or one that causes nuclear destruction, and addressing the clear-and-present challenges confronting people now, such as algorithmic discrimination in hiring. Regulators can, of course, try to tackle both, but some critics worry that too much time and energy is being spent on long-term threats while the dangers right in front of our faces are ignored.

In fact, one of the focal points of the Bletchley Declaration, last week's agreement brokered by Sunak and signed by 28 countries including the US and China, is the potential for "catastrophic harm" caused by AI. Even US President Joe Biden — whose executive order did more to tackle the immediate challenges of AI than the UK-brokered declaration did — said he became much more concerned about AI after watching the latest "Mission: Impossible" film, which features a murderous AI.

The thing is, the two sets of concerns – coming catastrophe vs. today's problems – are not mutually exclusive. MIT professor Max Tegmark recently argued that the catastrophe camp needs to shorten its time horizon: "Those who are concerned about existential risks, loss of control, things like that, realize that to do something about it, they have to support those who are warning about immediate harms … to get them as allies to start putting safety standards in place."
