US President Joe Biden may have just uttered his last official word on artificial intelligence. Just days before the 2024 presidential election — a race that will put either Vice President Kamala Harris or former President Donald Trump in the Oval Office — Biden outlined his case for military-industrial cooperation on AI to get the most out of the emerging technology.
The new National Security Memorandum outlines ways to accelerate the safe, secure, and responsible use of AI across US defense agencies and the intelligence community.
The NSM, released Oct. 24, is a follow-up to Biden’s October 2023 executive order on AI. It directs federal agencies to “act decisively” in adopting AI while safeguarding against potential risks. The memo names the AI Safety Institute, housed within the Commerce Department, as the primary point of contact between the government and private-sector AI developers. It requires new testing protocols, creates an AI National Security Coordination Group to align policies across agencies, and encourages cooperation with international organizations like the UN and G7.
“Many countries — especially military powers — have accepted that AI will play a role in military affairs and national security,” said Owen Daniels, associate director of analysis at Georgetown University's Center for Security and Emerging Technology. “That AI will be used in future operations is both inevitable and generally accepted today, which wasn't the case even a few years ago.” Daniels says AI is already being used for command and control, intelligence analysis, and targeting.
The boring uses of AI might be the most important. Experts told GZERO that the military’s most immediate applications of AI are far less dramatic than early reports of AI-enabled weaponry that can identify, seek out, and destroy a target without human intervention.
“When AI started heating up in the last few years, a lot of people in the military thought ‘killer robots, lethal autonomous weapon systems — this is the next thing,’” said Samuel Bresnick, a research fellow at CSET and a colleague of Daniels. “But what’s becoming clear is that AI is really well-suited to the ‘tail end’ of military operations — things like logistics and bureaucracy — rather than the ‘head end’ of targeting and weapons systems.” Even if AI only helps military personnel with mundane tasks like filling out expense reports, tracking supplies, and managing logistics, Bresnick said, in aggregate that could free up meaningful man-hours across the force.
This focus on improving efficiency is reflected in the memo's emphasis on generative AI models, which excel at processing large amounts of data and paperwork, according to Dean Ball, research fellow at the libertarian Mercatus Center at George Mason University. “Our national security apparatus collects an enormous amount of data from all over the world each day,” he said. “While prior machine learning systems had been used for narrow purposes — say, to identify a specific kind of thing in a specific kind of satellite image — frontier systems can do these tasks with the broader ‘world knowledge’” that companies like OpenAI have accumulated across many different domains. Combined with the government’s proprietary data, Ball said, that knowledge could aid defense and intelligence analysts.
Beyond number-crunching and complex data analysis, the technology could also enable sophisticated modeling capabilities. “If you’re undergoing a massive nuclear buildup and can't test new weapons, one way to get around that is to use powerful AI systems to model nuclear weapons designs or explosions,” Bresnick said. Similar modeling applications could extend to missile defense systems and other complex military technologies.
While Ball found the NSM rather comprehensive, he worries that the broader Biden administration effort to rein in AI could “slow down adoption of AI by all sorts of businesses” and reduce American competitiveness.
While the focus of the memo is national security, its scope extends to other areas meant to boost the private AI industry too. The memorandum specifically calls for agencies to streamline hiring practices and visa requirements to attract AI talent, and to improve acquisition procedures to better take advantage of private sector-made AI. It also emphasizes the importance of investing in AI research from small businesses, civil society groups, and academic institutions — not just Big Tech firms.
Calls for the ethical use of AI. US National Security Advisor Jake Sullivan underscored the urgency of the memo in recent remarks at the National Defense University in Washington, DC — noting that AI capabilities are advancing at “breathtaking” speed with implications for everything from nuclear physics and rocketry to stealth technology. Sullivan stressed the importance of developing and deploying AI responsibly in a national security context. “I emphasize that word, ‘responsibly,’” he said. “Developing and deploying AI safely, securely, and, yes, responsibly, is the backbone of our strategy. That includes ensuring that AI systems are free of bias and discrimination.” Sullivan said the US needs fair competition and open markets, and must respect privacy, human rights, civil rights, and civil liberties as it pushes forward on AI.
He said that acting responsibly will also allow the US to move quickly. “Uncertainty breeds caution,” Sullivan said. “When we lack confidence about safety and reliability, we’re slower to experiment, to adopt, to use new capabilities — and we just can’t afford to do that in today’s strategic landscape.”
As the United States seeks to gain a strategic edge over China and other military rivals using artificial intelligence, it’s leaving no stone unturned. This week, the US Treasury Department even finalized new rules restricting US investment in Chinese artificial intelligence, quantum computing, and chip technology.
America’s top national security officials want to ensure they’re building AI capacity, gaining an advantage over China, and deploying this technology responsibly — lest they risk losing popular support for an AI-powered military. That’s a strategic misstep they’re not willing to make.