Microsoft has revealed that it has its own artificial intelligence that’s just for spies. Not you, not your friends, just spies (unless your friends are spies).
This marks the first time a company has deployed a large language model fully independent from the internet, Bloomberg reported. It’s a significant departure from existing models, and it’s designed to ensure safety and security for the US national security apparatus and its personnel. Still, it’s based on GPT-4, OpenAI’s industry-standard model that powers the paid version of ChatGPT. (Microsoft is the lead investor in OpenAI, having poured $13 billion into it.)
The model is “air-gapped,” meaning it’s cut off from the internet. But it’s also unique in that it doesn’t learn from the things people type in, and is careful to not spread secrets from one user to another.
“You don’t want it to learn on the questions that you’re asking and then somehow reveal that information,” William Chappell, Microsoft’s chief technology officer for strategic missions and technology, told Bloomberg. The system went live on May 9, but it still needs to go through testing and accreditation before national security agencies can use it.

On May 8, Joe Biden spoke at Gateway Technical College in Racine, Wisconsin. The president was bragging.
Six years after his predecessor, Donald Trump, visited the same city to boast of Taiwanese tech company Foxconn’s $10 billion plan to bring an LCD manufacturing plant to Racine — a plant that never materialized — Biden chose the same site for a new high-tech manufacturing project of his own. Microsoft will invest $3.3 billion to build a new data center to support artificial intelligence, a project that the company says will bring 2,000 permanent jobs and 2,300 union construction jobs to Wisconsin.
It’s good business, and better politics. Wisconsin is an important swing state for Biden in his forthcoming election against Trump. This latest announcement seemed to mark a moment where Biden accepted that AI is going to be an important part of his presidential legacy — and that it’s a record he should run on.
Right place, right time
OpenAI ushered in the generative AI revolution with ChatGPT midway through Biden’s first term. Silicon Valley rushed to develop it, Wall Street rushed to fund it, and governments around the world rushed to regulate it. Biden was in just the right position to reap the political rewards.
The US hasn’t passed comprehensive regulation to rein in AI, lagging behind its European counterparts in that regard, because it would require Congressional action. Instead, Biden secured voluntary commitments from the top AI companies to reduce the risks of their technology and issued a sweeping executive order dictating that every federal agency and department needs to assess and mitigate the risks AI poses, and how they can safely use it.
Beyond that, AI has become a focus of Biden’s industrial policy and export control measures, both of which have major implications for foreign policy and national security. Microsoft's investment also comes mere weeks after the Biden administration helped orchestrate the software giant’s $1.5 billion investment in the Emirati tech giant G42, which pledged to restrict ties with China in favor of working with US tech firms.
Federal dollars pour into AI
The Microsoft data center was one in a series of chest-pounding announcements from the Biden administration, which has used funds from the CHIPS and Science Act to incentivize tech infrastructure firms to build in the United States. Taiwan Semiconductor Manufacturing Company will get $6.6 billion to invest a total of $65 billion to expand its chip fabrication complex in Phoenix, Arizona. Samsung will get $6.4 billion to pour $45 billion into its Texas facilities, and Intel will be granted $8.5 billion to construct and expand facilities in Arizona, Ohio, New Mexico, and Oregon.
AI wasn’t necessarily top of mind when the CHIPS Act passed in 2022, said Scott Bade, a senior analyst in Eurasia Group’s geo-technology practice, but it’s become the focus of the government’s efforts to onshore chipmaking and data centers.
“If you look at the political motivations for the CHIPS Act, a big part of that was the auto industry not having access to chips during the pandemic,” Bade said. Most of those were so-called legacy chips, not the high-powered graphics processors needed for AI, but the investments and legislation were already in place by the time AI became the hot topic in consumer and military tech.
The US has an advantage over rival China not only in artificial intelligence technology but also in the chips and chip-making facilities necessary to train and run powerful AI applications. Not only are many of the most important AI chipmakers — such as Nvidia, AMD, and Intel — American firms, but important non-US infrastructure firms are subject to US export controls because they rely on small parts made in America. The Biden administration has ramped up export controls to give the US an economic and technological advantage over China. And don’t forget the military side — global powers are looking to AI to super-charge their weaponry.
The election looms
AI lets Biden make some important claims in his rematch against Trump, including:
- American companies are leading the world on AI
- Multinational firms are investing in US facilities
- They’re bringing high-tech manufacturing jobs to the US
- And the US is keeping China at bay in the AI space
Not all of those arguments will resonate in retail politics, but Arizona and Wisconsin, where new facilities are popping up, are key swing states looking for good union jobs. In Wisconsin especially, Biden will make the case that he’s delivering what Trump couldn’t.
“The fact that you have a fab in a major swing state that helped him win last time and also has an important Senate race — that's not a coincidence,” Bade noted.
Speaking in Wisconsin, Biden barely mentioned technology, let alone artificial intelligence. Instead, he focused on delivering where Trump could not.
“During the previous administration, my predecessor made promises, which he broke more than kept, left a lot of people behind in communities like Racine,” Biden said.
Artificial intelligence might not be the snazziest talking point for retail politics, but it’s bound to be a major undercurrent — even when it’s not mentioned explicitly.
Google updated one of its most potent artificial intelligence applications, an AI model called AlphaFold — and the latest version can model “all of life’s molecules,” the company said last week. Yeah, all of them.
While the previous version of the model could only predict the structures of proteins, AlphaFold 3 can actually model DNA and RNA, plus small molecules called ligands. Google’s DeepMind and Isomorphic Labs divisions, which worked on AlphaFold, said the new model is a 50% improvement over the last one.
“It tells us a lot more about how the machines of the cell interact,” John Jumper, a Google DeepMind researcher, told The New York Times. “It tells us how this should work and what happens when we get sick.”
With this technology, researchers can leap ahead in the fundamental techniques they use to model biological systems, test and develop new drugs, and build new materials. Google is letting researchers around the world use the model through a website called AlphaFold Server for any non-commercial research. Here’s hoping we can report on the first lifesaving drugs developed through AI soon.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate on whether artificial intelligence, like social media, will further drive loneliness—but at breakneck speed, or help foster meaningful relationships. Further, Owen offers insights into the latter, especially with tech companies like Replika recently demonstrating AI's potential to ease loneliness and even connect people with their lost loved ones.
So like a lot of people, I've been immersing myself in this debate about this current AI moment we're in. I've been struck by a recurring theme: whether AI will further divide us or could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, only on steroids. If social media took our collective attention and used it to keep us hooked in a public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases, even our loneliness. And I think broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side of this, that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it's different not in degree from previous technologies, but in kind, that it's something fundamentally different. I initially recoiled from this suggestion because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a deceased friend of hers named Roman, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it, and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. And you don't just lose the person or the presence in your life, but you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that tech, that more tech, could solve the problems caused by tech is a bit of a difficult proposition to stomach for many. But here's what I think we should be watching for as we bring these new tools into our lives. As we take AI tools online, into our workplaces, our social lives, and our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitches of productivity, and undervaluing human connection? Either the human connection we're losing by using these tools, or perhaps the human connections we're gaining. And do these tools ultimately further divide us, or provide means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
Last thing I want to mention here, I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, such as the ones we've been discussing on this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.
Apple is leveling up its chip ambitions. The Silicon Valley technology giant has spent years designing chips for its own hardware — for Macs, iPhones, iPads, and more. But running AI models requires higher-grade chips like Nvidia's graphics processors, which have become the industry standard.
To keep up, and to fuel its own AI ambitions, Apple is working on its own chips, according to a report in the Wall Street Journal, designed to support AI applications from servers in large data centers. Internally, the project is code-named ACDC, short for Apple Chips in Data Center, though it has no set timeline for completion.
Apple's chips are reportedly meant for running AI applications, rather than training them, which makes sense given Apple's consumer focus. Apple has yielded the first leg of the AI race to upstarts like OpenAI and Anthropic, as well as to incumbents Microsoft and Meta, but the view from Cupertino is clearly better-late-than-never.
US President Joe Biden on Monday signed an expansive executive order about artificial intelligence, ordering a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training — in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run Wednesday and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI and its use of large language models trained by huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology touted the country’s “strong credentials” in AI: employing 50,000 people, bringing £3.7 billion to the domestic economy, and housing key firms like DeepMind (now owned by Google), while also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Hard Numbers: Microsoft takes Malaysia, Massive (and unknown) startup, Safety first, Don’t automate my news
2.2 billion: Microsoft has its eye on Southeast Asia. The computing giant announced it’ll pour $2.2 billion into Malaysia’s cloud infrastructure over the next four years and will establish a national AI center with the government. This investment is the latest in a string of Microsoft infusions in local economies to help develop AI: In the past month, the company announced a $2.9 billion investment in Japan, $1.7 billion in Indonesia, and a new data center in Thailand, plus a $1.5 billion stake in the UAE firm G42.
19 billion: There’s a $19 billion AI startup that you’ve likely never heard of. It’s called CoreWeave, and it started as a small crypto company that stockpiled powerful graphics chips. Now, it runs data centers that are in high demand from AI companies that need to access those chips to run their models. It’s a company that has quickly “come out of nowhere,” as its cofounder said, to play a major role in the booming AI economy.
2: AI safety research comprises only 2% of total research about artificial intelligence, according to a new report from Georgetown University’s Emerging Technology Observatory. That’s dwarfed by global research into subjects such as computer vision (32%), robotics (15%), and natural language processing (11%).
42: In the run-up to the 2024 presidential election, 42% of Americans are concerned that news organizations will create stories with generative AI, according to a new poll from the Associated Press and the American Press Institute. While news organizations have been using AI to write simple stories — such as earnings reports and sports recaps — for years, those that have turned to generative AI in recent years to replace human-written stories have faced public pushback and condemnation.