What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. Beyond the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing this will create jobs around the country. He also urges the use of additional export controls and foreign investment rules to keep the AI industry under US control, and outlines potentially global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this:
‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of AI’s negative climate effects, given the immense amounts of electricity the technology requires. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire approach to come into power in January.
Harris is comfortable both taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.

AI will get stronger in 2024
While The New York Times’ lawyers are suing the world’s most powerful AI firms, its reporters are simultaneously trying to make sense of this important emerging technology — namely, how rapidly it’s progressing before our eyes.
On Monday, veteran tech reporter Cade Metz suggested that AI will get stronger in innumerable ways.
“The A.I. industry this year is set to be defined by one main characteristic: a remarkably rapid improvement of the technology as advancements build upon one another, enabling A.I. to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot,” Metz writes.
Huh? He’s referring to the advent of mass-market AI-generated video. Just as Midjourney and DALL-E brought AI image generators to the masses in 2023, new tools will make it easy to type a prompt and generate entire videos made by AI.
Not only that, but popular chatbots like ChatGPT will become multimodal, meaning they can respond just as seamlessly with images, video, and audio as they do today with text. So perhaps there will be a true one-stop-shop for all your generative AI needs.
The logical reasoning of AI tools could also improve greatly this year, he suggests, allowing them to better function as “agents” to whom humans can delegate tasks and offload responsibilities.
Dust off your sci-fi classics: Smarter AI systems could power smart robots — though they’ll almost certainly invade factories first, rather than trying to become at-home personal butlers.