Looking inside the black box
But now researchers at Anthropic, the AI startup that makes the chatbot Claude, claim they’ve had a breakthrough in understanding their own model. In a blog post, Anthropic researchers disclosed that they’ve found 10 million “features” of their Claude 3 Sonnet language model — patterns of activity that light up when a user inputs something the model recognizes. They’ve been able to map features that are close to one another: One for the Golden Gate Bridge, for example, is close to others for Alcatraz Island, the Golden State Warriors, California Governor Gavin Newsom, and the Alfred Hitchcock film Vertigo — set in San Francisco. Knowing about these features allows Anthropic to turn them on or off, manipulating the model to break out of its typical mold.
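The two ideas in that paragraph — features that sit “close” to one another, and “turning a feature on” — can be sketched in a few lines of Python. This is a toy illustration only, not Anthropic’s actual method or data: the feature names, dimensions, and numbers are all made up, closeness is measured with cosine similarity, and “turning on” a feature is modeled as adding its direction to an activation vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature dictionary": each row is a direction in a model's activation
# space. Names and dimensions are hypothetical, purely for illustration.
feature_names = ["golden_gate_bridge", "alcatraz", "warriors", "tokyo_tower"]
features = rng.normal(size=(4, 16))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Nudge the first three toward a shared "San Francisco" direction so they
# cluster together, mimicking how related features sit near one another.
sf = rng.normal(size=16)
sf /= np.linalg.norm(sf)
features[:3] += 2.0 * sf
features /= np.linalg.norm(features, axis=1, keepdims=True)

def nearest(name):
    """Rank the other features by cosine similarity to the named one."""
    i = feature_names.index(name)
    sims = features @ features[i]  # rows are unit vectors, so this is cosine
    order = np.argsort(-sims)
    return [feature_names[j] for j in order if j != i]

print(nearest("golden_gate_bridge"))  # San Francisco features rank together

# "Turning a feature on": add its direction to an activation vector --
# the rough idea behind steering a model's behavior with a known feature.
activation = rng.normal(size=16)
steered = activation + 5.0 * features[0]
```

The usage is the point of the mapping: once a feature direction is identified, nearby directions reveal related concepts, and amplifying the direction pushes the model toward that concept.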
Google and OpenAI’s competition heats up
Both Google and OpenAI held big AI-focused events last week to remind the world why they should each be leaders in artificial intelligence.
Google’s announcement was wide-ranging. At its I/O developer conference, the company basically said that it’ll infuse AI into all of its products — yes, even its namesake search engine. If you’ve Googled anything lately, you might have noticed that Gemini, Google’s large language model, has started popping up and suggesting the answers to your questions. Google smells the threat of competition not only from ChatGPT and other chatbots that can serve as your personal assistant but also from AI-powered search engines like Perplexity, which we tested in February. It also announced Veo, a generative video model like OpenAI’s Sora, and Project Astra, a voice-assisted agent.
Meanwhile, OpenAI had a much more focused announcement. The ChatGPT maker said it’s rolling out a new version of its large language model, GPT-4o, and powering its ChatGPT app with it. The new model will act more like a voice-powered assistant than a chatbot — perhaps obviating the need for Alexa or Siri in the process if it’s successful. That said, how often are you using Alexa and Siri these days?
The future of AI, the company thinks, is multimodal — meaning models can process text, images, video, and sound quickly and seamlessly and return answers to users.
Most importantly, OpenAI said that this new ChatGPT app (on smartphones and desktops) will be free of charge — meaning millions of people who aren’t used to paying for ChatGPT’s premium service will now have access to its top model — though rate limits will apply. Maybe OpenAI realizes it needs to hook users on its products before the AI hype wave recedes — or Google leapfrogs into the consumer niche.
Hard Numbers: Unnatural gas needs, Google’s data centers, Homeland Security’s new board, Japan’s new LLM
8.5 billion: Rising energy usage from AI data centers could lead to additional demand for natural gas of up to 8.5 billion cubic feet per day, according to an investment bank estimate. Generative AI imposes high energy and water demands to power and cool expansive data centers, which climate advocates have warned could exacerbate climate change.
32 billion: Google is pouring $3 billion into data center projects to power its AI system. That budget includes $2 billion for a new data center in Fort Wayne, Ind., and $1 billion to expand three existing ones in Virginia. In earnings reports this week, Google, Meta, and Microsoft disclosed that they had spent $32 billion on data centers and related capital expenditures in the first quarter alone.
22: The US Department of Homeland Security announced a new Artificial Intelligence Safety and Security Board with 22 members including the CEOs of Alphabet (Sundar Pichai), Anthropic (Dario Amodei), OpenAI (Sam Altman), Microsoft (Satya Nadella), and Nvidia (Jensen Huang). The goal: to advise Secretary Alejandro Mayorkas on “safe and secure development and deployment of AI technology in our nation’s critical infrastructure.”
960 million: SoftBank, the Japanese technology conglomerate, plans to pour $960 million into upgrading its computing facilities over the next two years in order to boost its AI capabilities. The company’s broad ambitions include funding and developing a large language model that’s “world-class” and geared specifically toward the Japanese language.
Bad-behaving bots: Copyright Office to the rescue?
It might not be the flashiest agency in Washington, DC, but the Copyright Office, part of the Library of Congress, could be key to shaping the future of generative AI and the public policy that governs it.
The New York Times reports that tech companies – no strangers to spending big bucks on lobbying – are lining up to meet with Shira Perlmutter, who leads the Copyright Office as Register of Copyrights. Music and news industry representatives have requested meetings too. And Perlmutter’s staff is inundated with artists and industry executives asking to speak at public “listening sessions.”
Copyright is central to the future of generative AI. Artists, writers, record labels, and news organizations have all sued AI companies, including Anthropic, Meta, Microsoft, and OpenAI, claiming they have broken federal copyright law by training their models on copyrighted works — models that, often, spit out results that rip off or replicate those same works.
The Copyright Office is set to release three reports detailing its position on the issue this year, which the Times says will be “hugely consequential” as lawmakers and courts grapple with nuanced questions about how intellectual property law jibes with the cutting-edge technology.
Regulating AI: The urgent need for global safeguards
There’s been a lot of excitement about the power and potential of new generative artificial intelligence tools like ChatGPT or Midjourney. But there’s also a lot to be worried about, like misinformation, data privacy, and algorithm bias, just to name a few.
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus lays out the case for effective, comprehensive, global regulation when it comes to artificial intelligence.
Because of how fast the technology is developing and its potential impact on everything from elections to the economy, Marcus believes that every nation should have its own AI agency or cabinet-level position. He also believes that global AI governance is crucial, so that AI safety standards are the same from country to country.
“We need to move to something like the FDA model,” Marcus tells Bremmer on GZERO World. “If you’re going to do something that you deploy on a wide scale, you have to make a safety case.”
Watch the GZERO World episode: Is AI's "intelligence" an illusion?
And watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.