2023: The Year of AI
Art: Courtesy of Midjourney
The Trends
1. Chatbot mania: OpenAI brought AI to the masses with ChatGPT. Though it debuted in late 2022, it truly hit its stride this year, especially when it started charging $20 a month in February for access to its latest and greatest version, which was then upgraded with GPT-4 in March. Google also released Bard, Microsoft launched Bing Chat, and the startup Anthropic introduced us to Claude. Each chatbot has its strength: While ChatGPT is strong on creative writing and inductive reasoning, Bing is best used as a replacement for internet search engines, and Bard’s latest upgrade – to its new language model Gemini – strives for commonsense reasoning and logic. Anthropic's Claude rivals ChatGPT for complex tasks like organizing huge chunks of text. For now, ChatGPT is top dog, but the younger pups are nipping at its heels.
2. Regulators ready their lassos: Following years of debate, the European Union finally reached an agreement in December on the scope of its landmark AI Act, the first major regulation for AI models. Next door, the United Kingdom has proceeded with a hands-off approach, more concerned with courting AI firms than reining them in. Rishi Sunak’s Bletchley summit, which produced a voluntary agreement on AI safety, was a political winner for the PM. The US, by contrast, falls somewhere between the UK and Europe in its approach: Months after President Joe Biden secured voluntary commitments from major AI firms to stave off the worst risks from AI, he issued an executive order to start codifying those protections. There’s no forceful regulation on the books yet — but the wheels are finally in motion.
3. The chip race heats up: AI models are nothing without the semiconductors, aka chips, that power them. Making them, however, is difficult and expensive, and there’s always some kind of holdup. The most powerful AI relies on the most powerful graphics chips, like those produced by NVIDIA and AMD. Recently, OpenAI had to halt new signups for the paid version of ChatGPT for a month because it didn’t have enough graphics chips to accommodate new users. The US, fearful of China catching up technologically and using AI for military purposes, has placed strict export controls on the flow of US-made chips, rules that were tightened this fall. For now, the US maintains its major advantage in the chip wars.
The Moments
4. Puffer pontiff: Who says the pope can’t sport a bit of bling? In March, a photo of Pope Francis wearing a long white Balenciaga puffer coat (which sells for $4,350), complete with an oversized crucifix necklace, went mega-viral. It was an outfit more befitting of a rap god than the bishop of Rome, and the fake image of the athleisure pope became a seminal example of the ways generative AI can fool people. It was ultimately harmless, but deepfake technology is getting better and better, and experts have long warned that it could cause chaos in fragile political environments, especially around elections. On Dec. 14, the pope, perhaps bothered by the uproar about his fantastical drip, called for an international treaty to ensure the ethical deployment of AI technologies, warning that it could disrupt democracy or enhance already deadly weapons of war.
5. The open letter: In March, a group of AI scientists and researchers called for a six-month pause on all AI development. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” their letter said. It was signed by technology luminaries and corporate leaders like Apple co-founder Steve Wozniak, MIT professor Max Tegmark, investor Ian Hogarth (who now leads the UK’s AI task force), and Elon Musk (who notably then launched his own AI company). Of course, development wasn’t stymied, but the letter did send a message that there are real and present dangers to the unmitigated development of artificial intelligence. They could have just watched “The Terminator” if you ask me.
6. The Hollywood strike: Actors and writers hit the picket lines for months this year, putting many of our favorite shows on ice. The strike was inspired in no small part by threats from the major studios to use AI to replace union labor. At issue for the Screen Actors Guild was the use of AI to digitally replicate union talent without compensation, while the Writers Guild was more concerned with the use of AI writing tools to shrink writers’ rooms and automate their work. Instead of banning the use of AI, however, both guilds struck deals with the studios that effectively ensure they don’t lose work or money because of the advent of AI. Digital replicas are okay, for example, if the actor is properly compensated.
7. OpenAI’s blowup: What in the world happened at OpenAI? The company’s nonprofit board of directors in late November suddenly and inexplicably fired Sam Altman, the face of the company and CEO of its for-profit arm, for being dishonest with them. But the board never really explained itself. After a weekend of pressure from Altman, OpenAI’s lead investor Microsoft, and 700 of the 770 employees at OpenAI who threatened to quit and work instead at Microsoft, the board reinstated Altman and some of its members resigned. There are still big questions about what happened, but for a brief moment, the most unstoppable company in tech seemed extremely fragile.
The People
8. Sam Altman: Altman is the face of AI. He helms OpenAI, the company that makes the GPT series of large language models, the chatbot ChatGPT, and the image generator DALL-E. But he has also been the AI whisperer for regulators in the US and around the world. Altman played a hands-on role in calling for regulation – as long as it was the kind he likes, such as government licensing for AI developers – and that’s been effective in helping shape global governance of this emerging technology. Of course, Altman was fired and then reinstated (see above), a drama without precedent in Silicon Valley. But the ordeal was so surreal and so shocking precisely because Altman isn’t just the head of the most important company in AI; he’s the poster boy for the entire technology.
9. Jensen Huang: OpenAI might be the most important software company in AI, but NVIDIA rules hardware. Under Huang’s guidance, NVIDIA has gone from a little-known company making graphics cards for computer gamers to one of the most critical semiconductor firms in the world. NVIDIA’s graphics chips, or GPUs, are necessary for high-powered computing operations like training and running AI systems. Sure, there’s competition — the chipmaker AMD has grand ambitions to compete directly on AI-ready graphics chips. But NVIDIA leads an industry with very little supply and a ton of demand. That’s one of the reasons why Huang steered NVIDIA to a trillion-dollar valuation this year. He’s not as public a figure as Altman, but his work has proven invaluable.
10. Geoffrey Hinton: Known as the “godfather of AI,” Hinton distinguished himself over a long career as one of the most prolific and accomplished researchers of artificial neural networks, a set of technologies that powers machine learning. He even won the Turing Award in 2018 — the most esteemed prize in computer science. But this year, Hinton made headlines in May after quitting his job at Google and citing the risks of unfettered development of AI. A whistleblower of sorts, Hinton’s message is extra potent — because, in many respects, he made the breakthroughs that led to present-day AI.