If appointed, an AI czar would be the White House official tasked with coordinating the federal government’s use of the emerging technology and its policies toward it. And while the role will not go to Elon Musk, the billionaire tech CEO whom Trump has named to run a government efficiency commission, Musk will have input into who gets the job.
The Trump administration has promised a deregulatory attitude toward artificial intelligence, including undoing President Joe Biden’s 2023 executive order on AI.
That order tasked federal departments and agencies not only with evaluating how to regulate the technology under their statutory authority but also with finding ways to use it to further their own goals. Under Biden, each agency was required to name a chief AI officer. If Trump keeps those positions, the White House AI czar would likely coordinate with these officials across the executive branch.

The leading AI startups and Big Tech incumbents are striving to make massive technological strides, but they still have an open question to answer: Can they actually make money?
What’s the business model for AI?
The Big Tech outlook: The largest technology companies, including Amazon, Apple, Google, Meta, and Microsoft, are all-in on artificial intelligence. Many have their own large language models, such as Google’s Gemini 1.5 Pro and Meta’s open-source Llama 3. For these firms, AI is one bet among many: Google remains a search and advertising company, and Meta a social media giant, but both are wagering that they can lead in developing this breakthrough technology. Depending on how you look at it, these AI plays are either loss leaders or long-term investments that keep the companies on the cutting edge and bolster their existing product lines.
And what the Silicon Valley giants can’t do themselves, they’re happy to outsource to specialized startups: Amazon has invested $8 billion in Anthropic, and Microsoft has poured $13 billion into OpenAI. While Microsoft develops its own AI models, it has also integrated OpenAI’s models into its Copilot assistant across its enterprise suite of products.
The startups: But without other products to cross-subsidize their ambitions, the smaller pure-play AI startups are left hunting for rock-solid business models.
OpenAI, which makes ChatGPT, lets users try the chatbot for free but sells a $20-a-month subscription that unlocks its most advanced tools and higher usage limits. The company also sells enterprise subscriptions, reportedly about $60 per user each month for companies with 150 employees or more, which comes out to $108,000 per year for a company of that size. And it has found some success: more than 1 million paid users of the enterprise version of ChatGPT, or roughly $720 million in annual revenue. Other AI startups, such as Anthropic (which makes Claude), the search engine Perplexity, the image generator Midjourney, and the music generator Suno, have similar freemium models, with the bigger checks coming from business-to-business sales.
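The subscription arithmetic above can be verified with a quick back-of-the-envelope calculation (all inputs are the approximate figures from the reporting, not official prices):

```python
# Back-of-the-envelope check of the enterprise pricing figures cited above.
# All numbers are approximate figures from the reporting.

seat_price_per_month = 60          # reported enterprise price per user
min_seats = 150                    # minimum company size cited

annual_cost_per_company = seat_price_per_month * min_seats * 12
print(annual_cost_per_company)     # 108000 -> the $108,000 figure

paid_enterprise_users = 1_000_000  # "more than 1 million paid users"
annual_enterprise_revenue = seat_price_per_month * paid_enterprise_users * 12
print(annual_enterprise_revenue)   # 720000000 -> about $720 million
```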
“The real money will be in business-to-business AI solutions provided they’re carefully deployed securely — something that the likes of Salesforce and Microsoft are promising,” said Gadjo Sevilla, senior analyst at the market research firm eMarketer. “This is easy for companies with large captive user bases since AI features will be an incremental cost to existing services and are also scalable across enterprises.”
The open-sourcers: While most AI companies have proprietary (or closed-source) models, a few have opted for open-source development, whereby they publish their code for free for developers to use and adapt it. Stability AI, which makes the open-source image generator Stable Diffusion, lets people use its model for free but charges companies that make $1 million or more in annual revenue for commercial licenses and support. That’s a monetization strategy that Meta could pursue in the future for its currently free and open-source Llama models.
The government option: AI companies have a third source of revenue beyond consumers and businesses: governments. OpenAI has secured contracts with US government agencies and public institutions as varied as NASA, the National Gallery of Art, the IRS, and Los Alamos National Laboratory, according to FedScoop. Microsoft, meanwhile, has AI deals with both the US and UK governments. And specialized firms like Palantir and Anduril have capitalized on US defense contracts with their AI technology for battlefields.
Running at a loss
OpenAI is currently valued at $157 billion, but the company behind the ChatGPT chatbot is still losing money. In September, the New York Times reported that OpenAI expects to make $3.7 billion in 2024, but it’s set to spend $5 billion in the process — a net loss of $1.3 billion.
The company’s internal projections estimate that revenue will hit $11.6 billion in 2025, but it will need to keep its costs, from training models to running services to paying employees, in check to turn a profit. Meanwhile, Anthropic is reportedly burning through $2.7 billion this year. The top expenses for these companies are computing infrastructure such as servers and chips, salaries for top talent, and the free tiers they offer casual users.
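The loss figures follow directly from the reported numbers; a minimal sketch, working in millions of dollars to keep the arithmetic exact:

```python
# OpenAI's reported finances (figures in millions of dollars, as reported
# by the New York Times and cited above).

revenue_2024 = 3_700     # expected 2024 revenue: $3.7 billion
spending_2024 = 5_000    # expected 2024 spending: $5 billion

net_loss_2024 = spending_2024 - revenue_2024
print(net_loss_2024)     # 1300 -> the $1.3 billion net loss

# Projected 2025 revenue is $11.6 billion; to turn a profit,
# total costs must come in below that figure.
projected_revenue_2025 = 11_600
print(projected_revenue_2025 - spending_2024)  # 6600 -> headroom if costs stay flat
```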
To become profitable, these companies must lower costs, raise prices, or develop in-house capabilities like chips and data centers to reduce reliance on paying other firms.
Playing the long game
Perhaps AI startups need to think like the tech giants and play the long game. After all, these many billions of dollars in funding should give them some runway.
Sevilla said that OpenAI is headed in the right direction. “OpenAI shifting from nonprofit think tank to a for-profit AI innovator now behooves the company to generate sustainable profits,” he said, referring to the company’s recent change in ownership structure. “It’s challenging Google in search and browsers, it's trying to make inroads into education, and there's a good chance it will develop its own hardware to reduce reliance on Nvidia. Any of these areas can generate profits, but it could take time.”
Dev Saxena, director of Eurasia Group’s geo-technology practice, said the real value lies in building platforms that other companies will use to develop their own AI applications — “the same way that the internet unleashed so much entrepreneurialism and innovation."
In other words: The winners of the AI race might not be the companies with the most advanced AI but those who build the infrastructure and platforms other businesses need — and those who find a way to make money doing it.
The US government under President Joe Biden has imposed significant export controls not only on US-made chips but also on semiconductor manufacturing equipment necessary for Huawei to mass produce its own chip designs. US rules have largely cut Huawei off from the most powerful machines made by Dutch lithography company ASML, which essentially makes stencils to imprint miniature designs on chips for mass manufacturing, and TSMC, the world’s largest contract chipmaker. (The US Commerce Department is investigating how Huawei chips recently ended up on TSMC assembly lines.) Instead, Huawei relies on the Chinese chip manufacturer SMIC, which uses less powerful models of ASML machines.
But despite Huawei’s ambitions, Reuters reports that the company has struggled under these restrictions to make effective chips at scale. For the Ascend 910C, the yield rate, the percentage of chips that come off manufacturing lines fully functional, is reportedly only 20%, while experts say a 70% yield is needed to be commercially viable. China’s top chip designer will need a breakthrough with limited resources to make good on its public promises to compete with Nvidia.

According to a CNBC analysis of US government data, a single data center operating at 85% capacity consumes as much electricity as 710,000 households, or 1.8 million people. There are currently 3,000 data centers across the US, by one estimate, with the greatest number in Virginia (477), Texas (291), and California (285). With artificial intelligence as a leading factor, power demand from data centers is expected to increase 160% by 2030, according to Goldman Sachs. And major tech companies such as Google and Microsoft have revised their environmental goals because of their AI ambitions.
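To see why the yield gap matters commercially, consider the expected cost of each working chip. This sketch is illustrative: the per-chip fabrication cost is a hypothetical placeholder, and only the 20% and 70% yield figures come from the reporting.

```python
# Illustrative: how yield rate drives the effective cost per good chip.
# cost_per_attempt is a hypothetical number; only the yields are reported.

def cost_per_good_chip(cost_per_attempt: float, yield_rate: float) -> float:
    """Expected fabrication cost per fully functional chip."""
    return cost_per_attempt / yield_rate

cost = 1_000.0  # hypothetical cost to fabricate one chip, working or not

at_reported_yield = cost_per_good_chip(cost, 0.20)  # reported Ascend 910C yield
at_viable_yield = cost_per_good_chip(cost, 0.70)    # threshold experts cite

print(round(at_reported_yield))  # 5000 -> roughly 3.5x the viable-yield cost
print(round(at_viable_yield))    # 1429
```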
The incoming Donald Trump administration promises to take a deregulatory approach across the board. Lee Zeldin, a former congressman Trump has tapped to lead the Environmental Protection Agency, said he wants to “make America the AI capital of the world.”
So all signs point to Biden’s climate goals slipping further out of reach.
Amazon is working on the third generation of its AI chips, called the Trainium2, which industry insiders told Bloomberg was a “make-or-break moment” for the company’s chip ambitions.
Luckily, Amazon already has one important customer’s buy-in: Anthropic, which makes the chatbot Claude. On Nov. 22, Amazon announced it’s investing another $4 billion in Anthropic, doubling its total investment to $8 billion. As part of the deal, the Claude maker, perhaps the main rival to OpenAI, will continue to use Amazon’s Trainium series of chips. Amazon makes and invests in AI software and has the cloud infrastructure needed for AI, so if it can crack the chip market and produce chips comparable to Nvidia’s top models, it could become a dominant player in artificial intelligence.

US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, directing a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event runs Wednesday and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders, including Biden and Emmanuel Macron, have passed on attending Sunak’s summit, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly large language models trained on huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than in immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology touted the country’s “strong credentials” in AI: employing 50,000 people, contributing £3.7 billion to the domestic economy, and housing key firms like DeepMind (now owned by Google), while also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Hard Numbers: Hacks galore, Hollywood dreams, US on top, Pokémon Go scan the world
3 million: Evidently, there’s hot demand for AI-related scripts in Hollywood. A thriller about artificial intelligence from a relatively unknown screenwriter named Natan Dotan just sold for $1.25 million, a figure that will rise to $3 million if it’s turned into a film. Despite Hollywood’s perennial discomfort with AI infiltrating the film industry, all the hubbub may have studios betting that audiences will turn out for a good old-fashioned AI thriller.
36: Stanford researchers analyzed the AI capabilities of 36 countries and determined that the US significantly leads in most of the 42 areas studied — including research, private investment, and notable machine learning models. China, which leads in patents and journal citations, came in second, followed by the UK, India, and the UAE.
10 million: Niantic, the developer of the augmented reality game Pokémon Go, announced that it’s building an AI model to make a 3D world map using location data submitted by the game’s users. The company said it already has 10 million scanned locations.