What’s up with Worldcoin?
Sam Altman wants to scan the eyeballs of every single person on Earth with an orb-shaped scanner and then pay them with cryptocurrency. This eye-raising proposition is called Worldcoin — also the name of the crypto coin in question — and seeks to solve a problem straight from science fiction: In the future, what if we can’t tell humans and robots apart?
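The underlying idea is what researchers call “proof of personhood”: one unique identifier per human, issued without storing the raw biometric. The sketch below is a loose conceptual illustration, not Worldcoin’s actual protocol (which relies on custom orb hardware, iris-code matching, and zero-knowledge proofs). It also simplifies reality: two scans of the same eye never match byte-for-byte, so real systems compare fuzzy iris codes rather than exact hashes.

```python
# Conceptual sketch of "proof of personhood" (illustrative only, NOT
# Worldcoin's real protocol). We pretend a scan yields a stable byte
# string and store only its one-way hash, so the registry can check
# uniqueness without retaining anyone's raw biometric data.
import hashlib

registry: set[str] = set()  # hashes of every iris seen so far

def enroll(iris_scan: bytes) -> bool:
    """Return True if this scan belongs to a previously unseen person."""
    digest = hashlib.sha256(iris_scan).hexdigest()  # scan is not recoverable
    if digest in registry:
        return False  # duplicate: this person has already enrolled
    registry.add(digest)
    return True

# First enrollment succeeds; a repeat attempt by the same eye is rejected.
assert enroll(b"alice-iris-pattern") is True
assert enroll(b"alice-iris-pattern") is False
```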
Perhaps unsurprisingly, this strange initiative has received pushback from governments around the world concerned about the biometric privacy of their citizens. Its operations were shut down in Spain and Portugal in March and in Hong Kong in May. It was investigated by Kenyan authorities who later dropped the probe.
Worldcoin’s ability to operate in Europe will be determined in the coming weeks when the Bavarian data protection authority is set to rule on whether it’s compliant with GDPR, the European data privacy law.
The company says that about 6.5 million people worldwide have gotten scanned. That includes people in the US, where orbs are available in five cities: Atlanta, Los Angeles, New York, Palo Alto, and San Francisco. Worldcoin has not been widely scrutinized by US regulators, but that could change if Europe takes a strong position on Altman’s side hustle.
What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. In addition to the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing this will create jobs around the country. He also urges the use of additional export controls and foreign investment rules to keep the AI industry under US control, and outlines potentially global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this: ‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of AI’s negative effects on the climate, even though the technology requires immense amounts of electricity. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire administration taking power in January.
Harris is comfortable both taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.
Hard Numbers: Scarlett Johansson’s voice on ChatGPT, Sony Music’s warning, Energy drain, Stability AI’s instability, Sharing the love — and the GPUs
2: Film star Scarlett Johansson turned down OpenAI’s Sam Altman twice when he asked to use her voice for ChatGPT’s speech applications. Despite her refusals, OpenAI released a voice called “Sky” that sounds similar to Johansson. The actress (well, at least her voice) starred in the 2013 film “Her” — which Altman has called his favorite movie — portraying a disembodied AI that the protagonist becomes infatuated with. OpenAI says it hired another actress to voice “Sky,” but the company has now removed the voice “out of respect for Ms. Johansson.”
700: Sony Music sent letters to 700 AI developers and music streaming companies telling them it’s “opting out” of letting them use its content for training models. That includes musical compositions as well as lyrics, recordings, music videos, and album artwork. Last year, AI-generated songs featuring the fake voices of Drake and The Weeknd became a viral smash on social media — but music publishers aren’t in the habit of licensing their assets for free.
30: Microsoft reported that its carbon emissions jumped 30% between 2020 and 2023, a sign of the huge toll that artificial intelligence could take on the planet. Microsoft wants to be carbon negative by 2030, but its generative AI initiatives have hampered progress toward that goal.
1 billion: Amid a cash crunch, Stability AI is reportedly exploring a sale. The startup, which makes the Stable Diffusion image generator, was valued at $1 billion in 2022. The biggest question is who would buy it. The Biden administration has chilled the merger and acquisition market, taking an especially aggressive approach to litigating alleged antitrust violations throughout Silicon Valley.
Chuck Schumer’s light-touch plan for AI
Over the past year, Senate Majority Leader Chuck Schumer (D-NY) has led the so-called AI Gang, a group of senators eager to study the effects of artificial intelligence on society and curb the threats it poses through regulation. But calling this group a gang implies a certain level of toughness that was nowhere to be found in the roadmap it unveiled on May 15.
Announcing the 31-page roadmap, a bipartisan set of policy priorities for Congress, Schumer bragged of “months of discussion,” “hundreds of meetings,” and “nine first-of-their-kind AI Insight Forums,” including sessions with OpenAI’s Sam Altman and Meta’s Mark Zuckerberg.
What he delivered, however, was more of a spending plan than a vision for real regulation – the policy proposals were limited, and the approach was hands-off. The roadmap called for $32 billion over the next three years for AI-related research and innovation. It offered suggestions, such as a federal data privacy law, legislation to curb deepfakes in elections, and a ban on “social scoring” like the social credit system that China has tested.
Civil society groups aren’t pleased
The long list of proposals is “no substitute for enforceable law – and these companies certainly know the difference, especially when the window to see anything into legislation is swiftly closing,” the AI Now Institute’s Amba Kak and Sarah Myers West wrote in a statement. Maya Wiley, CEO of the Leadership Conference on Civil and Human Rights, wrote that “the framework’s focus on promoting innovation and industry overshadows the real-world harms that could result from AI systems.”
Ronan Murphy of the Center for European Policy Analysis wrote that the gap between the US and EU approaches to AI could not be more stark. “US lawmakers believe it is premature to restrain fast-moving AI innovation,” he wrote. “In contrast, the EU’s AI Act bans facial recognition applications and tools that exhibit racial or other discrimination.”
Former White House technology advisor Suresh Venkatasubramanian tweeted that the proposal felt so unoriginal and recycled that it might have been written by ChatGPT.
An AI law is unlikely this year
Adam Conner, vice president of tech policy at the Center for American Progress, said that while the roadmap has some areas of substance, such as urging a federal data privacy law, “most sections are light on details.” He called the $32 billion spending proposal a “detailed wish list” for upcoming funding bills.
It was a thin result for something that took so long to cook up, he said, and “leaves little time on the calendar this year for substantive AI legislation, except for the funding bills Congress must pass this year and possibly the recently introduced bipartisan bicameral American Privacy Rights Act data privacy bill.” This means any other AI legislation will likely have to wait until next year. “Whether that was the plan all along is an open question,” Conner added.
Danny Hague, assistant director of Georgetown University’s Center for Security and Emerging Technology, agreed that it’s unlikely anything comprehensive gets passed this year. But he doesn’t necessarily see the report as a sign that the US will be hands-off with legislation. He said the Senate Working Group likely realizes that “time is limited,” and there are already “structures in place — regulatory agencies and the congressional committees that oversee them — to act on AI quickly.”
Jon Lieber, managing director for the United States at Eurasia Group, said he didn’t understand why an AI Gang was necessary at all. “I’m confused why Schumer felt the need to do something here,” he said. “This process should have been handled by a Senate committee, not the leader’s office.”
Such a soft line from Congress means that until further notice, President Joe Biden — who has issued an executive order on AI, imposed export controls, directed CHIPS Act funding to create jobs and secure tech infrastructure, and told his agencies to get up to speed on AI — might just be the AI regulator in chief.
Newspapers fight back, file suit against OpenAI
Eight major newspapers owned by Alden Global Capital sued ChatGPT maker OpenAI on Tuesday in federal court, alleging copyright infringement. The group includes major regional newspapers, such as the New York Daily News, the Chicago Tribune, and the Orlando Sentinel.
While many publishers have struck lucrative licensing deals with AI companies, allowing those firms to train their large language models by scraping the publishers’ websites, others have opted for litigation, most notably the New York Times. The Grey Lady sued OpenAI in December, alleging the Sam Altman-led startup violated federal copyright law by illegally training its model on Times journalism and spitting out responses indistinguishable from what people can find on the Times’ website or in its print newspaper. OpenAI has said the suit is “without merit.”
The Alden group followed the Times’ lead in suing Microsoft, too. Microsoft is OpenAI’s biggest investor, having poured $13 billion into the startup, and uses the GPT suite of language models in its Copilot assistant and Bing search products.
If a large language model proprietor is found to have violated copyright statutes, it could pose an existential threat to that model — meaning the company may have to start training a new one from scratch.
Musk takes OpenAI to court
Tesla CEO Elon Musk sued OpenAI and its CEO Sam Altman late last week, saying that they breached the terms of a contract by prioritizing their profits over the public good. In 2015, Musk helped found and fund OpenAI, the artificial intelligence research lab-turned-industry leader. He resigned as co-chair of the company’s nonprofit board of directors in 2018, citing conflicts of interest with his own company, Tesla, which was investing heavily in AI.
Now, Musk alleges that OpenAI violated the terms under which he gave it money, but no one seems to have written those terms down.
The Verge points out that the complaint hinges on the violation of a “Founding Agreement,” an alleged oral contract that Musk feels was formed in the course of business discussions. If a court finds that a contract was formed – and courts aren’t usually friendly to oral contracts – Musk is requesting that the court compel OpenAI to revert to its original nonprofit mission, including making research data publicly available, instead of the profit-motivated one that’s turned it into an $80 billion juggernaut.
There’s one other thing that Musk-watchers should keep in mind: Musk currently runs an AI startup of his own, xAI, which has a chatbot called Grok. This means his business directly competes with OpenAI. Is it any wonder he’s resorting to litigation that could take OpenAI down a peg?
OpenAI’s Altman incident under investigation
Two investigations may soon shed light on one of the biggest mysteries in Silicon Valley: Why was Sam Altman fired from OpenAI?
To recap, the OpenAI board fired Altman in November, saying he was not “consistently candid in his communications,” but it failed to provide specifics (the big mystery). OpenAI’s staff and lead investor, Microsoft, immediately protested the ouster and successfully campaigned for Altman’s reinstatement – and for fresh faces on the nonprofit board.
The US Securities and Exchange Commission is now investigating whether OpenAI misled its investors in firing Altman. Meanwhile, the law firm WilmerHale is conducting an internal investigation of the Altman firing and will soon present its findings to the current board of directors, which commissioned the review.
Altman’s alleged deceit may have something to do with his plans to raise trillions of dollars for a chip venture, something that’s come to light in the months since this debacle. We have our ear to the ground for where the investigations are headed, and what they could mean for the giant of genAI.
How OpenAI CEO Sam Altman became the most influential voice in tech
OpenAI CEO Sam Altman has become the poster child for AI, but it's difficult to understand his motivations.
Artificial intelligence was a major buzzword at the World Economic Forum in Davos this year, and OpenAI CEO Sam Altman was the hottest ticket in town. CEOs and business leaders crowded into sold-out conference halls to hear his take on the current explosion in generative AI and where the technology is headed.
On GZERO World, Ian Bremmer sat down with AI expert and author Azeem Azhar and asked why everyone, both at Davos and in the tech community as a whole, seems to be pinning their hopes and fears about the future of AI on Altman. Azhar says that there are actually a lot of similarities between the individual and the technology he works on.
“I like to think of [Altman] as someone who has been fine-tuned to absolute perfection,” Azhar explains. “And fine-tuning is what you do to an AI model to get it to go from spewing out garbage to being as wonderful as ChatGPT is. And I think Sam’s gone through the same process.”
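To ground the metaphor: fine-tuning takes a model that has already been trained on vast general data and continues training it on a small, curated dataset to steer its behavior. Below is a minimal illustrative sketch using the open-source Hugging Face transformers library; the tiny “distilgpt2” model and the toy examples are stand-ins chosen for this illustration, not anything OpenAI actually uses.

```python
# Minimal fine-tuning sketch (illustrative assumptions: distilgpt2 as the
# base model, two toy training strings; real fine-tuning uses thousands
# of curated examples and careful evaluation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "distilgpt2"  # a small open model, cheap enough to run locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy "curated" dataset standing in for real instruction data.
examples = [
    "Q: What is fine-tuning? A: Continued training on curated data.",
    "Q: Why fine-tune? A: To steer a general model toward desired behavior.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes over the toy data
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # For causal LMs the inputs double as labels (next-token prediction).
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```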