OpenAI announces next model and new safety committee
OpenAI announced that it is training a new generative AI model to eventually replace GPT-4, the industry-standard model that powers ChatGPT and Microsoft Copilot.
But the OpenAI board of directors also said that it’s forming a new Safety and Security Committee to advise it on the risks posed by powerful AI. After the previous board of directors abruptly fired CEO Sam Altman for not being candid with them in November 2023, OpenAI staffers and lead investor Microsoft pressured the board to rehire Altman. It worked: Altman rejoined the company, and most of the old board members resigned.
OpenAI has sought to be an industry leader in generative AI while staying in the good graces of regulators looking to rein in its ambitions. OpenAI took the Biden administration’s voluntary pledge to mitigate AI risks in July 2023, and Altman recently joined the Department of Homeland Security’s new Artificial Intelligence Safety and Security Board.
The US has done little to curb the ambitions of its most prominent AI firms, but that goodwill depends on OpenAI appearing to be a reliable and trustworthy actor — one that will propel Silicon Valley ahead of other global tech hubs while building AI that can help humanity, not harm it.
Does AI’s power problem have a nuclear solution?
Sam Altman, the co-founder and CEO of OpenAI, has broad ambitions to solve all of the problems of AI, from algorithms to high-tech chips. But there’s one more problem on his plate: energy. Altman is backing a series of companies that hope to find a way to power the revolutionary tech, literally.
One of the startups Altman has invested in is Oklo, which is building a nuclear power plant in Idaho that could eventually power the energy-guzzling data centers AI depends on, though there is no clear public timeline for the project. Google and Microsoft have also partnered with nuclear power firms for their energy needs.
Nuclear energy comes with risks, of course, and Oklo has had trouble with regulators, who have rejected its applications in the past, citing a lack of safety and security information. But going nuclear — if companies like Oklo can get it right — is also a cleaner alternative to more carbon-emitting energy sources.
Hard Numbers: Understanding the universe, Opening up OpenAI, Bioweapon warning, Independent review, AI media billions
100 million: AI is helping researchers better map outer space. One recent simulation, led by a University College London researcher, mapped 100 million galaxies across just a quarter of the southern hemisphere's sky. It is part of a wider effort to understand dark energy, the mysterious force thought to be accelerating the expansion of the universe.
30,000: The law firm WilmerHale, which completed its investigation of Sam Altman’s brief November ouster from OpenAI, examined 30,000 documents as part of its review. The contents of the report haven’t been made public, but new board chairman Bret Taylor said the review found the prior board acted in good faith but didn’t anticipate the reaction to removing Altman, who is now rejoining the board. The SEC, meanwhile, is still investigating whether OpenAI deceived investors, though it’s unclear whether WilmerHale will share its findings with the agency.
90: More than 90 scientists have pledged not to use AI to develop bioweapons, part of an agreement forged partly in response to congressional testimony by Anthropic CEO Dario Amodei last year. Amodei said that while the current generation of AI technology can’t yet handle such a task, that capability may be only two or three years away.
100: More than 100 AI researchers have signed an open letter asking the leading companies to allow independent investigators access to their models to ensure that risk assessment is thorough. “Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter said.
8 billion: The media company Thomson Reuters says it has an $8 billion “war chest” to spend on AI-related acquisitions. In addition to publishing the Reuters newswire, the company sells access to services like Westlaw, a popular legal research platform. It’s also committed to spending at least $100 million developing in-house AI technology to integrate into its news and data offerings.
Sam Altman’s wish on a $7 trillion star
Sam Altman, CEO of OpenAI, needs more chips. He needs a lot more chips. The only thing stopping his $100 billion startup — if you can still call it a startup — may be the current supply of powerful chips.
The semiconductor fabrication process is notoriously slow and expensive, and the global supply chain runs through a few big, highly specialized firms. Only a small number of companies — AMD, Intel, and Nvidia — actually design chips made for generative AI. And they’re pricey: Nvidia, which is set to take 85% of the market next year by one estimate, sells its H100 chips for about $40,000 a pop.
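To put those prices in perspective, here is a minimal back-of-envelope sketch in Python. The roughly $40,000 unit price comes from the estimate above; the cluster sizes are purely illustrative assumptions, not reported figures.

```python
# Back-of-envelope: chip costs for AI training clusters at the
# roughly $40,000-per-H100 price cited above. Cluster sizes are
# illustrative assumptions, not reported figures.

H100_UNIT_PRICE = 40_000  # USD, approximate price per Nvidia H100


def cluster_chip_cost(num_gpus: int, unit_price: int = H100_UNIT_PRICE) -> int:
    """Cost of the chips alone -- excludes networking, power,
    cooling, and data-center construction."""
    return num_gpus * unit_price


for gpus in (1_000, 10_000, 50_000):  # hypothetical cluster sizes
    cost = cluster_chip_cost(gpus)
    print(f"{gpus:>6,} GPUs -> ${cost / 1e9:.2f}B in chips alone")
```

Even under these simplified assumptions, a 50,000-GPU cluster runs about $2 billion in chips before a single data center is built, which helps explain the appetite for in-house silicon.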
Naturally, Altman wants to make his own chips, but to make that dream a reality, he’s asking for an obscene amount of money.
How much does Altman want to raise? According to the Wall Street Journal, Altman is deep in talks with investors to raise $5 trillion to $7 trillion for a new chip venture.
“The dollar amount he’s reportedly trying to raise — $7 trillion — eclipses not just the semiconductor investments made by governments, including the United States’ $39 billion investment in chip manufacturing, but also the size of the entire semiconductor industry,” says Hanna Dohmen, a research analyst at Georgetown University's Center for Security and Emerging Technology. “It cannot be overstated how massive this sum of money is.”
Eurasia Group’s Director of Geotechnology Alexis Serfaty calls the sum “preposterously high and also seemingly arbitrary,” and says while it helps that OpenAI would be a built-in customer for this new chipmaker, the semiconductor industry is a difficult one with a propensity for demand gluts and supply chokepoints at every turn. Also, it would require strong leadership. “There are only so many people in the world with the expertise and experience to run an advanced fab, let alone the 300 [facilities] that $7 trillion would buy,” he adds.
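For a sense of scale, here is a minimal sketch of the arithmetic behind those comparisons, using only figures quoted above: the reported $7 trillion ask, the $39 billion US chip-manufacturing investment Dohmen cites, and Serfaty's estimate of roughly 300 fabs.

```python
# Scale check on the reported $7 trillion ask, using only figures
# quoted in this piece.

ASK = 7e12                  # reported upper end of Altman's target, USD
US_CHIPS_INVESTMENT = 39e9  # US chip-manufacturing investment cited by Dohmen
NUM_FABS = 300              # Serfaty's estimate of how many fabs $7T would buy

print(f"Ask vs. US chip investment: ~{ASK / US_CHIPS_INVESTMENT:.0f}x larger")
print(f"Implied cost per advanced fab: ~${ASK / NUM_FABS / 1e9:.1f} billion")
```

The implied figure of roughly $23 billion per advanced fab is in the ballpark of what leading-edge facilities cost today, which underlines Serfaty's point: the constraint is expertise and leadership, not just capital.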
Money can buy a lot — but it might not be able to solve the problems that every chipmaker already faces.
Who’s going to give him all that money? Altman has reportedly met with Masayoshi Son, CEO of the influential Japanese investment company SoftBank, and officials from Taiwan Semiconductor Manufacturing Company, one of the world’s largest chip fabrication companies, about investing in his new venture. Altman reportedly wants to “raise the money from Middle East investors and have TSMC build and run” new chip fabrication plants.
But the real eyebrow-raising potential investor isn’t in East Asia; it’s in the Middle East. In recent weeks, Altman has reportedly met with Sheikh Tahnoun bin Zayed al Nahyan, the United Arab Emirates’ security chief, to discuss the venture. OpenAI already struck a deal in October with the Emirati technology company G42 to bring AI solutions to the Middle Eastern market, laying the foundation for additional business support from the wealthy nation.
This is going to cause geopolitical headaches, right? Almost definitely. Washington is extremely touchy about foreign investment in US companies and even more hesitant when it comes to scarce, critical technology such as semiconductors.
“While the US government is eager to bring chip manufacturing to the United States, it would likely be reluctant to do so with the involvement of the UAE government given existing concerns about Emirati companies’ relations with Chinese counterparts,” says Dohmen, who notes that, under US law, companies need licenses to even export certain semiconductors to the UAE.
America’s number one concern is China. Not only has the Biden administration invested heavily in the US chip industry, but it has launched a no-holds-barred campaign to prevent China from getting its hands on chips or even cloud-based AI. Over the past few years, the administration has enacted stringent export controls that bar semiconductor technology made with US parts from being sold to China, which it fears will use AI to supercharge its military. Dohmen adds that lawmakers are worried that G42 is already “dealing with blacklisted Chinese firms.”
Simply put, Serfaty says, “Altman’s partnerships with foreign governments could conflict with this US national security strategy.”
Could the US take action against this new venture? Yes. The US government has taken the extraordinary step to block foreign investment in chip companies. In 2018, the Trump administration blocked the sale of the US-based Qualcomm to the then-Singapore-based Broadcom, citing national security concerns. (Broadcom has since moved its headquarters to the US). That administration also blocked the sale of Lattice Semiconductor to a US private equity firm funded by Chinese capital.
Altman could be inviting antitrust scrutiny as well. If he controls both the country’s most important generative AI company and the chip supply chain it relies upon, he’ll raise eyebrows with any antitrust regime — even one less aggressive toward Big Tech than the current regime led by the FTC’s Lina Khan and the DOJ’s Jonathan Kanter. The government is already starting to look into Microsoft’s $13 billion investment in OpenAI.
In short, all eyes are on OpenAI. The ChatGPT maker and its once-embattled, now-emboldened chief have their sights set on global AI domination. Whether it’s $7 trillion or far less, they’re due to make a real attempt to solve the chip problem that appears to stand in the way of true unbridled success.
Sam Altman’s chip ambitions
The chipmaking process is notoriously difficult and expensive. AI developers like OpenAI depend on powerful chips from firms like Nvidia and AMD. Fabrication often runs through Taiwan Semiconductor Manufacturing Company or South Korea-based Samsung, the two biggest fabrication companies by market share.
With this new venture, known by the code name Tigris, Altman wants to add another major player to the chipmaking process, which has been prone to bottlenecks in recent years. The global supply chain crisis coincided with a worldwide chip shortage, leading to low supplies of appliances, computers, cars, and video game systems. Altman is in talks to raise funds from global players, including Japan’s SoftBank and the UAE’s G42, and is promising that the venture’s network of fabs will be global in scope.
Generative AI developers need the most powerful chips on the market — and as many of them as they can get.
Different views: Altman's optimism vs. IMF's caution
Much of the buzz in Davos this year has been around artificial intelligence and the attendance of precocious talents like OpenAI’s Sam Altman, who has helped pioneer the biggest technological breakthrough since the personal computer. The World Economic Forum’s Chief Economists Outlook suggested near unanimity in the belief that productivity gains from AI will become economically significant in high-income economies within the next five years. And Altman himself has said he is motivated to “create tech-driven prosperity.”
But there are less rosy predictions around AI. The International Monetary Fund has warned that 40% of jobs worldwide could be adversely affected and that overall inequality could worsen. In the current feverish climate, such warnings have largely been dismissed.
“Contemporaneous accounts of tech revolutions are always wrong,” Altman said on a panel this week with Microsoft’s Satya Nadella.
Many AI proponents talked about three-phase adoption – firstly, actively using the technology to assist workers; secondly, watching the technology in its autopilot mode to assess its accuracy; and, thirdly, letting it go and trusting it will work. Altman said the three-phase approach should make AI less scary. “This is much more of a tool than I expected. It’ll get better, but it’s not yet replacing jobs. It is this incredible tool for productivity … it lets people do their jobs better.”
In an earlier panel, he was asked about his ouster from OpenAI and subsequent reinstatement and said he was “super-confused” at being fired. But he balked at discussing the “soap opera” further, steering instead toward the prospects of AI. He likened the progression of ChatGPT to that of the iPhone: The original iPhone from 2007 is a very rudimentary device compared to the current iPhone 15. “Eventually we will have a good one,” he said.
Altman said much of the fear-mongering has been overdone. “There was a two-week freak-out with GPT-4 – that it will change everything. Now it’s like ‘Why is it so slow?’ … GPT-4 is a big deal in some sense, but it did not change the world. We are making a tool that is impressive, but humans will go on doing human things.”
AI takes center stage at Davos
Artificial intelligence is a hot topic in Davos, Switzerland, this week, as government officials and industry leaders gather for the 54th edition of the World Economic Forum summit. More than 30 scheduled events will address AI as it relates to jobs, healthcare, ethics, chips, and access.
Among the most "sought-after" attendees are AI executives, including OpenAI's Sam Altman, Inflection AI's Mustafa Suleyman, Google DeepMind's Lila Ibrahim, Cohere's Aidan Gomez, and Mistral AI's Florian Bressand. Altman, who will speak about the benefits and risks of AI on Thursday, recently gave a podcast interview with Microsoft co-founder Bill Gates, sharing his thoughts on AI regulation.
Altman said that he's interested in the idea of a “global regulatory body that looks at those super-powerful systems” – ones far more powerful than current models like GPT-4 – and suggested that the International Atomic Energy Agency might be a good model for such a body. “This needs a global agency of some sort because of the potential for global impact.”
The world of AI in 2024
2. Labor tensions: The acceleration of AI will continue to reshape industries, automating jobs and displacing workers. That will lead to widespread tension in various sectors of the economy. Union leaders could make AI the centerpiece of their strikes, and you might hear a lot of talk about “reskilling” workers on the lips of lawmakers heading into the 2024 election. This time it’s sure to work …
3. Copyright clarity: We don’t really know how AI models are trained, but we know they’re at least partially trained on unlicensed copyrighted material. Clarity is coming in Europe: The forthcoming AI Act mandates some transparency about training data. But in the US, where regulation is sparse, the courts are considering a big legal question about whether using copyrighted material as training data violates the law. At issue is whether the output is “transformative enough.” The answer to this legal question has extremely high stakes. Look for authors and artists to keep suing. But also look for companies, under pressure from lawmakers, to start opening up about how their systems are trained, whether copyrighted material is used, and why they think the stuff their models spit out does not constitute copyright infringement. We at GZERO aren’t holding our breath for writers' royalties (but we’d sure take ’em).
4. A big new law in Europe: The European Union’s AI Act is set to become law in the spring of 2024. Of course, lawmakers could falter before hitting the finish line, but an agreement this month made that unlikely. What’s ahead: The EU just held the first of 11 sessions to hammer out the details of the law, which will lead to a “four-column document” by February, reconciling proposals from the three EU legislative bodies. Only after that will country representatives vote to finalize the act. But this landmark law won’t have teeth in 2024 even if everything goes to plan because there’s a 12-month grace period for companies to comply. It’s all hurry up and wait.
5. The hype cycle continues: 2023’s major investment in AI won’t prove to be a flash in the pan. With hints of lower interest rates and still-palpable interest in AI from tech investors hungry for massive returns, expect billion-dollar valuations, IPOs, mergers and acquisitions, and big-money investment from top tech firms in startups all to accelerate.
6. Congress does something: The US Congress does more bickering than lawmaking these days. But there’s real political will not to get left behind on AI regulation. Lawmakers have been regularly discussing AI, grilling its corporate leaders, and brainstorming ideas for governance. They’ve proposed removing red tape for chipmakers, mandating disclosures for AI-generated political ads, and even considered a “light-touch” law making AI developers self-certify for safety. The US isn’t necessarily likely to pass something sprawling like the EU’s AI Act, but Congress will likely pass something about AI in the coming year. More than 50 different AI-related bills have been introduced since the 118th Congress began last year, but none has passed either chamber.
7. Antitrust comes for AI: Regulators are circling. The US government sued Google for allegedly abusing its monopolies in search and advertising technology, Amazon for hurting competition on its e-commerce platform, and Meta for buying dominant market power through its Instagram and WhatsApp acquisitions. That’s the hallmark of current FTC Chair Lina Khan and Justice Department antitrust chief Jonathan Kanter, who have been set on enforcing antitrust law against Big Tech. And that fervor is likely to hit AI in 2024. There’s lots of political will to use antitrust law in the UK and Europe, which means scrutiny will soon come to AI. In fact, it’s already here. The FTC and the UK’s Competition and Markets Authority are reportedly probing Microsoft’s investment into OpenAI – it’s not a full-fledged investigation yet, but in 2024 antitrust regulators will be watching AI very closely.
8. Election problems: In 2024, an unprecedented number of countries – some 40-plus – will head to the polls, and many will have their eyes on places like the United States and India for the use of AI in disinformation campaigns ahead of Election Day. There is concern about deepfake technology fueling confusion or contributing to an already-challenging misinformation problem. We’ve already seen deepfake songs impersonating Indian Prime Minister Narendra Modi and videos portraying US President Joe Biden. But what we haven’t seen yet is AI disrupting an election. Will 2024 be the year that AI-generated words, videos, images, and music play a surprising role in elections?
9. New companies you’ve never heard of: By the end of 2024, the top companies in AI may be the same as today: Anthropic, Google, Meta, Microsoft, and OpenAI. But chances are there will be a startup that you've never heard of on the list. Why? Not only is innovation an everyday reality in AI, but investors are excited to fund these projects to reap potential rewards. In the first half of 2023, AI's share of total startup funding in the US more than doubled from 11% to 26% compared to the same period in 2022. That includes household names and challengers you might have already heard of, such as OpenAI ($29 billion) and Anthropic ($5 billion), which had big funding rounds this year. But there are 15 new AI "unicorns" (billion-dollar companies) that could break into the mainstream, including the enterprise AI firm Cohere ($2.2 billion) and the research lab Imbue ($1 billion). Even in a high-interest rate environment, AI startups have fetched big valuations despite still-paltry revenue estimates — at a time when “easy money” has vanished from the broader tech sector. Expecting stasis would be foolish.
10. The real reason Sam Altman was fired: Expect to learn why OpenAI really fired Sam Altman in 2024. It’s perhaps the great mystery in AI, but it can’t remain a secret forever. If anyone knows the answer, please let us know.