Meta, the parent company of Facebook and Instagram, has taken a different approach to the AI boom than many of its Silicon Valley peers. Instead of developing proprietary large language models, Meta has championed open-source models that are free and accessible for anyone to use. (That said, some open-source advocates say it’s not truly open-source because Meta has usage rules.)
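To make concrete what "free and accessible for anyone to use" means in practice, here is a minimal sketch, not drawn from Meta's documentation, of how a developer could load one of the released Llama models through the Hugging Face Transformers library. It assumes the transformers and torch packages are installed and that Meta's license for the gated model has been accepted on the Hugging Face Hub; the model ID and prompt are illustrative.

# Illustrative sketch: loading an open-weight Llama model with Hugging Face Transformers.
# Assumes `pip install transformers torch` and access to the gated model repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo; Meta's license must be accepted

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what is an open-weight language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point of the sketch is simply that the weights, once licensed, run locally with off-the-shelf tooling, which is what makes downstream reuse, including by researchers Meta never intended to serve, hard to police.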
But that openness allowed Chinese researchers to develop their own AI model — for military use — on top of one of Meta’s Llama models, according to a paper they published in June that was first reported by Reuters on Nov. 1.
Chinese university researchers, some of whom have ties to the People's Liberation Army, developed a model called ChatBIT using Llama 2, which was first released in February 2023. (Meta’s top model is Llama 3.2, released in September 2024.) In the paper reviewed by Reuters, the researchers said they built a chatbot “optimized for dialogue and question-answering tasks in the military field” that could be used for “intelligence analysis, … strategic planning, simulation training, and command decision-making.”
Llama’s acceptable use policy prohibits using the models for “military, warfare, nuclear industries or applications [and] espionage.” Meta told Reuters that the use violated its terms and said it took unspecified action against the developers, but it also called the discovery insignificant. “In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI,” Meta said.
Open-source development has already become a hot-button issue for regulators and tech advocates. For example, the California AI safety bill, which was vetoed by Gov. Gavin Newsom, became controversial for mandating that developers have a “kill switch” to shut off models — something that’s not possible for open-source developers who publish their code. With an open-source model in China’s hands — even an old one — regulators may have the fodder they need to crack down on open-source AI the next time they push to pass AI rules.

Rather than relying on manual control, these new Ukrainian drones use software from the Ukrainian company NORDA Dynamics, which uses computer vision — a type of artificial intelligence technology — to guide them to their targets. An unnamed Ukrainian official told Reuters this summer that the hit rates of manually controlled drones had fallen to 30–50%, and predicted at the time that the new drones could achieve 80% hit rates if successful.
Russia and Ukraine are racing toward automating their militaries — and sometimes that means drones vs. drones. For instance, the Ukrainian military is using drones to take down Russian camera reconnaissance drones that help Russian forces identify targets on the ground in Ukraine. The Washington Post has also reported that Russian drones have indiscriminately targeted civilians in the Ukrainian city of Kherson. It’s unclear whether Ukraine’s new drones can down these exact drones yet, but it’s clear that the two warring countries are already engaged in a drone-on-drone war.

Americans in 50 states and Washington, DC, are headed to the polls today to vote for the next president of the United States. While neither Vice President Kamala Harris nor former President Donald Trump has given much attention to artificial intelligence on the campaign trail — and AI hasn’t completely disrupted the election process as some experts feared — there are still important questions surrounding AI and the election.
For one, could AI-generated disinformation or deepfakes sow chaos that affects the results of the election? The hours and days ahead — both as Americans vote and as local officials count the vote — are crucial.
Earlier this year, election security experts and officials warned that AI-generated information could flood the campaign trail. While some has surfaced — including a fake Joe Biden robocall during New Hampshire’s primary and an AI-generated video mocking Harris that Elon Musk shared on his platform, X — its impact hasn’t been widespread.
Meanwhile, the US intelligence community has been proactive in identifying when foreign actors used AI to carry out influence operations. The Office of the Director of National Intelligence issued a statement in September noting that foreign countries such as China, Iran, and Russia have used AI to target voters with content about the election and political issues, such as immigration and the conflict in Gaza.
But the most sensitive time for disinformation may still be ahead. “If it's extremely close, that gives more license for disinformation to run around because it’s easier to believe,” said Scott Bade, a senior geo-technology analyst with Eurasia Group. While fake information might not be AI-generated, he said, there could be something that tricks people and goes viral, such as an AI-generated video or image purporting to show fraud at a polling station.
Further, voting rights groups have issued warnings that Spanish-language voters are seeing more AI-generated misinformation about the election than are English speakers. This language gap could cause additional confusion at a time when the Latino vote has become a central point of intrigue in the election, especially after a comedian at a recent Trump rally made racist comments about Puerto Rico.
The election results will also shape how AI policy is made over the next four years — a critical time for this emerging technology. Trump’s approach has emphasized deregulation. He has criticized the CHIPS Act, under which the US has given subsidies to foreign companies like TSMC and Samsung to build in the US and cement America’s chip advantage over China. Trump wants to be seen as tough on China but prefers tariffs to subsidies.
“Under Trump, funding for AI research would likely prioritize military applications and national security, reflecting his America First agenda,” said Esteban Ponce de León, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab, “whereas, Harris aims to direct funding toward societal challenges like health care disparities and climate change.”
Harris would likely continue Biden’s legacy on artificial intelligence, rolling out incremental rules to rein in the tech industry if she becomes president. But her ability to push through a more ambitious AI agenda depends on the makeup of Congress — and, if polling is to be believed, Democrats are long shots to take the Senate even if the House and the presidency are within reach.
Even if artificial intelligence hasn’t been front and center thus far this election cycle, there’s no guarantee it won’t still be. And the dam has broken, which means AI will be an unavoidable consideration of election security officials for years to come.
Hard Numbers: Hey big spender, an iPhone boost, Google’s robot coders, Super Micro’s super downfall
200 billion: Capital expenditures from four of the largest US tech companies — Amazon, Microsoft, Meta, and Google — are set to exceed $200 billion this year, driven by enormous investments in artificial intelligence software and hardware. Amazon’s spending alone surged 81% in a year, leading CEO Andy Jassy to assure investors the company’s bets will pay off. These are record sums at a time when Wall Street seems hesitant to keep rewarding excessive spending on AI.
46 billion: Apple reversed its fortunes after a bad year of iPhone sales, selling more than $46 billion of its signature smartphone between July and September — a 6% increase year over year. The company’s new iPhone 16 is part of its push into artificial intelligence — marketed as a phone capable of handling all of its Apple Intelligence features, such as a supercharged Siri, new writing tools, and call transcription — which started rolling out last week. The company hopes that AI can convince customers old and new that it’s time to pay up for a new iPhone, which starts at $799.
25: More than 25% of all new code produced by Google is written by artificial intelligence, according to CEO Sundar Pichai. AI produces the code, which is then reviewed and accepted by human engineers. A recent Stack Overflow survey found that 76% of all software developers are using or are planning to use AI to code.
45: Super Micro Computer, a key supplier of servers built around Nvidia’s chips, saw its stock fall 45% after its auditor, Ernst & Young, resigned because it was “unwilling to be associated with the financial statements prepared by management.” Once one of the hottest AI stocks, the company has now wiped out all of its 2024 gains.

When Elon Musk acquired X (formerly Twitter), he pledged to rid it of bots, or fake accounts that tend to serve as trolls and conduits for misinformation. “We will defeat the spam bots or die trying,” Musk tweeted in 2022, a few months before he officially bought the social media platform.
But a new analysis by Cyabra, in partnership with GZERO, found that roughly 20% of the accounts interacting with election-related tweets from Musk were, in fact, bots.
Cyabra analyzed five notable Musk posts that pertained to issues like the endorsement and competence of the two presidential candidates, concerns over free speech under a potential Harris administration, and immigration policies. It found that “bot-driven accounts dominated much of the conversation, with their sentiment and content suggesting an agenda to influence public perception and even hinting at potential coordinated activity among bot communities.”
These inauthentic accounts “were responsible for driving a disproportionately large share of the engagement and traffic.”
In two additional posts analyzed, ones in which Musk firmly positioned himself against the Harris-Walz ticket, 40% of the activity was driven by inauthentic accounts. “A closer examination of the engagement revealed coordinated activity between these inauthentic accounts, with two distinct bot clusters working in tandem to amplify traffic and drive engagement,” Cyabra’s report said.
While Musk often laments the spread of disinformation in the digital era, he frequently spreads it himself to hundreds of millions of followers — and the site he owns continues to be at the heart of the problem.
US President Joe Biden may have just uttered his last official word on artificial intelligence. Just days before the 2024 presidential election — a race that will put either Vice President Kamala Harris or former President Donald Trump in the Oval Office — Biden outlined his case for military-industrial cooperation on AI to get the most out of the emerging technology.
The National Security Memorandum outlines new ways to accelerate the safe, secure, and responsible use of AI in US defense agencies and the intelligence community.
The NSM, released Oct. 24, is a follow-up to Biden’s October 2023 executive order on AI. It directs federal agencies to “act decisively” in adopting AI while safeguarding against potential risks. The memo names the AI Safety Institute, housed within the Commerce Department, as the primary point of contact between the government and the private sector's AI developers. It requires new testing protocols, creates an AI National Security Coordination Group to coordinate policies across agencies, and encourages cooperation with international organizations like the UN and G7.
“Many countries — especially military powers — have accepted that AI will play a role in military affairs and national security,” said Owen Daniels, associate director of analysis at Georgetown University's Center for Security and Emerging Technology. “That AI will be used in future operations is both inevitable and generally accepted today, which wasn't the case even a few years ago.” Daniels says AI is already being used for command and control, intelligence analysis, and targeting.
The boring uses of AI might be the most important. Experts told GZERO that the immediate applications of military adoption of AI are much less dramatic than early reports of AI-enabled weaponry that can identify, seek out, and destroy a target without human intervention.
“When AI started heating up in the last few years, a lot of people in the military thought ‘killer robots, lethal autonomous weapon systems — this is the next thing,’” said Samuel Bresnick, a research fellow at CSET and colleague of Daniels’. “But what’s becoming clear is that AI is really well-suited to the ‘tail end’ of military operations — things like logistics and bureaucracy — rather than the ‘head end’ of targeting and weapons systems.” Bresnick said that if AI can help military personnel with mundane tasks like filling out expense reports, tracking supplies, and managing logistics, it could in aggregate free up meaningful man-hours across the military.
This focus on improving efficiency is reflected in the memo's emphasis on generative AI models, which excel at processing large amounts of data and paperwork, according to Dean Ball, research fellow at the libertarian Mercatus Center at George Mason University. “Our national security apparatus collects an enormous amount of data from all over the world each day,” he said. “While prior machine learning systems had been used for narrow purposes — say, to identify a specific kind of thing in a specific kind of satellite image — frontier systems can do these tasks with the broader ‘world knowledge’” that AI companies like OpenAI have gathered across many domains, knowledge that, combined with proprietary data, could aid defense and intelligence analysts.
Beyond number-crunching and complex data analysis, the technology could also enable sophisticated modeling capabilities. “If you’re undergoing a massive nuclear buildup and can't test new weapons, one way to get around that is to use powerful AI systems to model nuclear weapons designs or explosions,” Bresnick said. Similar modeling applications could extend to missile defense systems and other complex military technologies.
While Ball found the NSM rather comprehensive, he worries about the broader Biden administration effort to rein in AI as something that could “slow down adoption of AI by all sorts of businesses” and reduce American competitiveness.
While the focus of the memo is national security, its scope extends to other areas meant to boost the private AI industry too. The memorandum specifically calls for agencies to reform hiring practices such as visa requirements to import AI talent and improve acquisition procedures to better take advantage of private sector-made AI. It also emphasizes the importance of investing in AI research from small businesses, civil society groups, and academic institutions — not just Big Tech firms.
Calls for the ethical use of AI. US National Security Advisor Jake Sullivan emphasized the urgency of the memo in recent remarks at the National Defense University in Washington, DC — noting that AI capabilities are advancing at “breathtaking” speed with implications for everything from nuclear physics and rocketry to stealth technology. Sullivan stressed developing and deploying AI responsibly in a national security context. “I emphasize that word, ‘responsibly,’” he said. “Developing and deploying AI safely, securely, and, yes, responsibly, is the backbone of our strategy. That includes ensuring that AI systems are free of bias and discrimination.” Sullivan said the US needs fair competition and open markets, and must respect privacy, human rights, civil rights, and civil liberties as it pushes forward on AI.
He said that acting responsibly will also allow the US to move quickly. “Uncertainty breeds caution,” Sullivan said. “When we lack confidence about safety and reliability, we’re slower to experiment, to adopt, to use new capabilities — and we just can’t afford to do that in today’s strategic landscape.”
As the United States seeks to gain a strategic edge over China and other military rivals using artificial intelligence, it’s leaving no stone unturned. The US Treasury Department even finalized new rules this week restricting US investment in Chinese artificial intelligence, quantum computing, and chip technology.
America’s top national security officials want to ensure they’re building AI capacity, gaining an advantage over China, and deploying this technology responsibly — lest they risk losing popular support for an AI-powered military. That’s a strategic misstep they’re not willing to make.
Moments before Sewell Setzer III took his own life in February 2024, he was messaging with an AI chatbot. Setzer, a 14-year-old boy from Florida, had struck up an intimate and troubling relationship — if you can call it that — with an artificial intelligence application styled to simulate the personality of “Game of Thrones” character Daenerys Targaryen.
Setzer gave numerous indications to the chatbot, developed by a company called Character.AI, that he was actively suicidal. At no point did the chatbot break character, provide mental health support hotlines, or do anything to prevent the teen from harming himself, according to a wrongful death lawsuit filed by Setzer’s family last week. The company has since said that it has added protections to its app in the past six months, including a pop-up notification with the suicide hotline. But that’s a feature that’s been standard across search engines and social media platforms for years.
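For a sense of how rudimentary that kind of guardrail can be, here is a minimal, hypothetical sketch in Python, not Character.AI's actual code, of a keyword-based check that intercepts messages suggesting self-harm and surfaces crisis-line information before any chatbot reply is generated. The keyword list, wording, and function names are illustrative placeholders.

# Hypothetical sketch of a self-harm safety check of the kind long standard on
# major platforms; keywords and hotline text are illustrative placeholders.
from typing import Callable, Optional

SELF_HARM_SIGNALS = ["kill myself", "end my life", "suicide", "hurt myself"]

CRISIS_MESSAGE = (
    "It sounds like you may be going through a very difficult time. "
    "In the US, you can reach the Suicide & Crisis Lifeline by calling or texting 988."
)

def safety_check(user_message: str) -> Optional[str]:
    """Return crisis-resources text if the message suggests self-harm, else None."""
    text = user_message.lower()
    if any(signal in text for signal in SELF_HARM_SIGNALS):
        return CRISIS_MESSAGE
    return None

def respond(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Run the safety check before handing the message to the chatbot model."""
    intervention = safety_check(user_message)
    if intervention is not None:
        return intervention  # break character and surface help resources instead
    return generate_reply(user_message)

Production systems generally rely on trained classifiers rather than simple keyword lists, but the basic pattern, checking the message before the model's persona responds, is the same.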
The lawsuit, filed in federal court in Orlando, also names Google as a defendant. The Big Tech company hired Character.AI’s leadership team and paid to license its technology in August, the latest in a spate of so-called acqui-hires in the AI industry. The lawsuit alleges that Google is a “co-creator” of Character.AI since its founders initially developed the technology while working there years earlier.
It’s unclear what legal liability Character.AI will face. Section 230 of the Communications Decency Act, which largely shields internet companies from civil suits over content posted by third parties, is untested when it comes to AI chatbots. Because a chatbot’s output comes directly from the AI company itself rather than a third party, many experts predict the shield won’t apply in cases like this.