Meta’s AI is being used by Chinese military researchers
Meta, the parent company of Facebook and Instagram, has taken a different approach to the AI boom than many of its Silicon Valley peers. Instead of developing proprietary large language models, Meta has championed open-source models that are free and accessible for anyone to use. (That said, some open-source advocates say it’s not truly open-source because Meta has usage rules.)
But because of that openness, Chinese researchers were able to develop their own AI model — for military use — from one of Meta’s Llama models, according to a paper they published in June and first reported by Reuters on Nov. 1.
Chinese university researchers, some of whom have ties to the People’s Liberation Army, developed a model called ChatBIT using Llama 2, which Meta first released in July 2023. (Meta’s top model is Llama 3.2, released in September 2024.) In the paper reviewed by Reuters, the researchers said they built a chatbot “optimized for dialogue and question-answering tasks in the military field.” It could be used for “intelligence analysis, … strategic planning, simulation training, and command decision-making,” the paper said.
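Part of what makes this possible is that open-weight models can be downloaded and fine-tuned by anyone with the hardware. Here’s a minimal sketch of that workflow using the Hugging Face transformers library — the dataset file and training settings are placeholder assumptions for illustration, not the ChatBIT team’s actual pipeline:

```python
# Minimal sketch: fine-tuning an open-weight Llama model on a local corpus.
# The dataset path and training settings are placeholders; this illustrates
# why open weights travel, not any specific group's pipeline.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-2-7b-hf"  # gated on the Hub, but downloadable after accepting the license
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token; reuse EOS for batching
model = AutoModelForCausalLM.from_pretrained(model_id)

# Any locally held text corpus can serve as fine-tuning data.
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    # Copies input_ids to labels for causal language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Once the weights are on disk, none of this requires Meta’s permission or cooperation — which is exactly why acceptable-use policies on open models are hard to enforce.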
Llama’s acceptable use policy prohibits using the models for “military, warfare, nuclear industries or applications [and] espionage.” Meta told Reuters that this use did violate its terms and said it took unspecified action against the developers, but it also dismissed the discovery as insignificant. “In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI,” Meta said.
Open-source development has already become a hot-button issue for regulators and tech advocates. For example, the California AI safety bill, which was vetoed by Gov. Gavin Newsom, became controversial for mandating that developers have a “kill switch” to shut off models — something that’s not possible for open-source developers who publish their code. With an open-source model in China’s hands — even an old one — regulators may have the fodder they need to crack down on open-source AI the next time they try to pass AI rules.

How to train your AI — without humans
Meta, the parent company of Facebook and Instagram, has prided itself on releasing innovative open-source models as an alternative to the proprietary — or closed-source — models of OpenAI, Anthropic, and other leading AI developers. Now, it claims one of its newest models can evaluate other AI models. (That really is meta.)
Researchers at Meta’s Fundamental AI Research – yep, they call it their FAIR team – detailed their work on what they’re calling a “self-taught evaluator” in an August white paper ahead of the new model’s launch. The researchers sought to train an AI to evaluate models based not on human preference but on synthetic data. In short, Meta is trying to develop an AI model that can evaluate and improve itself without reliance on humans.
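In rough outline, the loop looks something like the sketch below. This is a paraphrase of the paper’s idea, not Meta’s released code, and the model interface and helper methods here are invented for illustration:

```python
# Hypothetical sketch of a self-taught evaluator loop, paraphrasing the
# FAIR paper's description. Nothing here is Meta's code; the model's
# generate/perturb/judge/finetune methods are assumed interfaces.
from dataclasses import dataclass

@dataclass
class Judgment:
    reasoning: str   # the judge's written rationale
    preferred: str   # "A" or "B"

def self_taught_round(model, instructions):
    """One iteration: synthesize preference pairs, keep the model's
    correct self-judgments, and fine-tune on them."""
    kept = []
    for instruction in instructions:
        # 1. Build a pair whose winner is known by construction: answer
        #    the real instruction, then answer a corrupted variant of it.
        good = model.generate(instruction)
        bad = model.generate(model.perturb(instruction))

        # 2. Ask the current model to judge the pair, with reasoning.
        judgment = model.judge(instruction, answer_a=good, answer_b=bad)

        # 3. Keep only judgments that picked the known-good answer.
        if judgment.preferred == "A":
            kept.append((instruction, good, bad, judgment.reasoning))

    # 4. Fine-tune on the model's own correct judgments and repeat with
    #    the improved model as the next round's judge.
    return model.finetune(kept)
```

The trick is that no human ever labels which answer is better: each pair is constructed so the preferred response is known in advance, and the model’s own correct verdicts become its next round of training data.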
This could push AI to a place where it can sense its own imperfections and improve without being told to do so — a greater level of autonomy. Dystopian? Maybe.
Hard Numbers: Iran suspected of killing Afghan migrants, Meta busts lunch scheme, Venezuela jails more foreigners, US and NATO mark a decade of fighting ISIS
2 million: The United Nations has called for an investigation into reports that Iran’s security forces opened fire last weekend on roughly 200 Afghan migrants who had entered the country illegally, killing an unknown number of them. Iran has threatened to deport as many as 2 million undocumented Afghan migrants who live in the country as refugees from decades of war and famine in their home country.
25: There’s no free lunch, they say – but if there were, you certainly shouldn’t use the money to buy acne treatment pads, wine glasses, or laundry detergent. Meta has fired around two dozen employees in its Los Angeles office after they were caught using the company’s $25 meal allowances to purchase household items.
5: Venezuela has arrested five foreigners, including three Americans, on charges of terrorism. Since winning a heavily disputed election this summer, President Nicolas Maduro has cracked down on the opposition, accusing it of collaborating with foreign intelligence operatives. The recent arrests bring to 12 the number of foreigners detained in Venezuela.
10: The US and NATO allies on Thursday marked 10 years since the start of their campaign to defeat Islamic State, often referred to as “ISIS.” On the plus side, the terror organization was rooted out of its modern “caliphate” strongholds in Syria and Iraq. On the minus side, it has shown a growing presence and capability in the Sahel, where some local governments are pushing out Western forces, and Central Asia, where Islamic State is at war with the Taliban in Afghanistan and has managed to carry out attacks in Russia.
Posting this message won’t save you from Meta AI
If you’ve been on Facebook recently, you might have seen friends or even celebrities posting about Meta’s artificial intelligence. A viral message reads like this:
“Goodbye, Meta AI. Please note that an attorney has advised us to put this on; failure to do so may result in legal consequences. As Meta is now a public entity, all members must post a similar statement. If you do not post at least once, it will be assumed you are OK with them using your information and photos. I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.”
This message is legally bunk. Posting an image with these words offers people no legal protections against Meta or how it uses your data for training its AI. Additionally, Meta is only public in the sense that it’s been a publicly traded company on the Nasdaq stock market since 2012.
So, how can you actually opt out? Well, if you’re in the US, you can’t. In Europe and the UK, where there are privacy laws, you can follow these helpful instructions published by MIT Technology Review to keep what you post out of Meta’s training algorithms.
Your Facebook and Instagram posts are now AI-training data
Remember that embarrassing picture of you on Facebook? The one with the red solo cups in the background that you tried to hide from future employers? No, no, not that one. The other one.
Meta’s global privacy director, Melinda Claybaugh, recently told Australian legislators that, yes, the company’s artificial intelligence systems are trained in part on users’ public posts on Facebook and Instagram. The Facebook data trove dates all the way back to 2007, a year after it opened its service to the public.
The company allows users to set their posts to public or private and maintains that only the public posts are used for training AI. In Europe, users can opt out of having their information used to train Meta’s language models due to the EU’s privacy laws, and in Brazil, Meta was recently ordered to stop using its citizens’ data for this purpose.
In the UK, Meta paused training its AI on users’ posts following an inquiry from Britain’s Information Commissioner’s Office but plans to resume doing so after answering the regulator’s questions.
Given these revelations, you can guess that if you ask Meta’s AI for “embarrassing pictures from college,” its responses might be a little too accurate.
Meta’s news ban in Canada has led to a media disaster. What does that mean for US efforts to wrangle big tech platforms?
It’s been a year since Meta yanked Canadian news from its platforms – Facebook, Instagram, and Threads – in response to a government bill that would see tech giants pay news outlets for linking to their online content. The Online News Act, which is similar to legislation passed in Australia, led to threats from both Meta and Google that they would pull news content originating in Canada. Google eventually struck a deal with media outlets; Meta did not, and it shows no sign of changing course a year later.
The full effects of Meta’s news ban are just coming to light. A report released this month by the Media Ecosystem Observatory finds that engagement with Canadian news online has dropped by nearly half in the last year, including an 85% decline on Facebook and Instagram, a loss that “has not been compensated by increases on other social media platforms.”
It also finds that nearly a third of local news outlets that were active on social media are now dormant. What’s more, a whopping 75% of the public is unaware of the ban, which has left Canadians consuming less news – and more disinformation – than ever before.
“Canadians continue to learn about politics and current events through Facebook and Instagram,” the report summarizes, “but through a more biased and less factual lens than before, and many Canadians do not even realize the shift has occurred. They do not appear to be seeking news elsewhere.”
It’s a worst-of-all-worlds scenario in Canada as a struggling media industry and a growing online disinformation problem collide, depriving outlets of much-needed views and shares, and readers of access to reliable, high-quality journalism. In an ironic twist, a law meant to preserve news media by filling the coffers of news outlets, allowing them to keep staff and grow coverage, is contributing to its demise.
Will the Liberals stand up for their law?
The Liberal government isn’t backing down in the face of this new data, though. Ottawa now says Meta may still be regulated under the Online News Act because some news is still sneaking through the block. That argument looks like a technicality, but it speaks to the government’s intention to double down on the law.
Moreover, there’s big money at stake. Google has signed a CA$100 million deal to fund journalism. That money will be managed by small independent outlets focused on digital journalism. It’s too early to say what effect the money will have on news media in Canada, since the program is just starting to roll out, and there’s plenty still to be determined. But you know what they say – tens of millions here, tens of millions there, and eventually it adds up to real money.
Graeme Thompson, a senior analyst with Eurasia Group’s global macro-geopolitics practice, says “The de-platforming of Canadian news content is probably not what the government expected when they launched their Online News Act, and it’s having the perverse effect that now Canadians are less exposed to quality, reliable journalism and reporting on social media platforms.”
But he doesn’t expect Canada will back down “unless there’s a change in government.”
There may be one by the next federal election, due in 2025, as the Conservatives are up in the polls. Conservative leader Pierre Poilievre has criticized the legislation as censorship, suggesting it was “like 1984” and expressing concern that the government was trying to ban Canadians from seeing the news. That tone suggests he may be inclined to rescind the law, or at least change it.
US efforts to extract media payments from platforms are moving … slowly
Canada isn’t the only country working to secure payments from the tech sector to offset the harmful effect its advertising-market dominance has on news media. But the Canadian experience may serve as a warning, or at least a lesson, for US lawmakers.
A bill before Congress, the Journalism Competition and Preservation Act of 2023, would set up a process for collective negotiation between news media and online platforms over payments to the former in exchange for access to their content. First introduced by Sen. Amy Klobuchar in 2021, during the last Congress, and reintroduced in 2023 for the current one, the bill is going nowhere fast: it has seen no movement since it was placed on the Senate’s legislative calendar in July 2023.
The state of California is considering a similar bill. Meta has threatened to block news there if the bill, which enjoys bipartisan support, passes. The California Journalism Preservation Act passed the state assembly 46-6 and is now working its way through Senate committee in the face of opposition from tech giants, who claim the bill won’t support local journalism but will instead act as a giveaway to hedge funds and big media companies.
A 2023 white paper found that Google and Meta made billions from linking to news – $21 billion and $4 billion, respectively – and “owed” publishers $10 billion to $12 billion and $1.9 billion annually as compensation for the profit they make from news media content.
Scott Bade, a senior analyst with Eurasia Group’s geo-technology unit, expects the US won’t rush to emulate Canada’s approach. He notes that a divided Congress and a looming election mean lawmakers won’t be keen to make headway on a controversial tech regulation bill. The latest data from Canada won’t exactly spur action, either.
In California, where Democrats control the state legislature, the chances of a bill passing may be higher, he notes, but lawmakers still face intense lobbying from the tech industry, and it’s not a given that Gov. Gavin Newsom would even sign it.
A matter of life and death
The consequences of news bans are real. They suppress the capacity of media organizations to get factual, reported information to the public – and thus put media, particularly local outlets, at further risk of folding. They also facilitate the flow of unreliable information. The consequences of this dynamic can be quite literally a matter of life and death.
The recent far-right, anti-immigrant riots in the United Kingdom were violent and included fires being lit at hotels where asylum-seekers were staying. There have been hundreds of arrests so far after online misinformation and disinformation spread claiming that a Muslim immigrant was responsible for the stabbing deaths of three youths in Southport – in fact, the suspect is a non-Muslim who was born in Cardiff.
As Time reports, the riots were brutal as “[f]ar-right groups were seen looting, attacking police and locals, and performing Nazi salutes in the street. As the mobs chanted ‘send them home’ and ‘Islam out,’ they also destroyed mosques, libraries, and graffitied racial slurs on homes.”
Ahead of the upcoming US presidential election and 2025 Canadian federal election, there is worry that online disinformation will pose a serious, even “unprecedented” threat, which could lead to harassment, intimidation, and even violence.
The next war
Governments are trying to extract funds from well-heeled tech platforms and struggling to keep media outlets afloat while fighting to displace misinformation and disinformation with more reliable sources of news. But track records are spotty, and the future is uncertain.
In some ways, the media funding battle is the “last war,” says Bade, and a new struggle is emerging over artificial intelligence and the “bigger threat” of content stripping.
“If media companies are going to have collective negotiation with tech companies,” Bade says, “it probably should be over that.”
Hard Numbers: Startups are up, Google gas, Brazil dings Meta, Slow and steady
27.1 billion: From April to June, investors poured $27.1 billion into US-based artificial intelligence startups, according to PitchBook. That’s nearly half of the $56 billion that all American startups raised during that time. Startup investment is up 57% year over year — something for which the AI industry can claim lots of credit.
48: Google’s greenhouse gas emissions are up a whopping 48% since 2019, thanks in no small part to its investments in AI. In the tech giant’s annual environmental report, it chalked up the increase to “increased data center energy consumption and supply chain emissions.” It previously set a goal to reach net-zero emissions by 2030 and now says that’s “extremely ambitious” given the state of the industry. Many AI firms are struggling to meet voluntary emissions goals due to the massive energy demands of training and running models.
9,000: The Brazilian government on Tuesday ordered Meta to stop training its AI models on citizens’ data. The penalty? A fine of 50,000 reais (about $9,000). The government gave Meta five days to amend its privacy policy and data practices, citing the “fundamental rights” of Brazilians.
75: Bipartisan consensus is hard to come by these days. But in a recent survey of US voters conducted by the AI Policy Institute, 75% of Democrats and 75% of Republicans said they would prefer AI development to be slow and steady rather than have the US race ahead to gain a strategic advantage over China and other foreign adversaries.
AI is changing the fine print on your favorite services
Adobe recently faced public outrage when devoted users read into ambiguities in its updated terms of use. The company changed those terms earlier this month, noting that it “may access [user] content through both automated and manual methods,” including machine learning. Adobe wrote a blog post clarifying that it’s not peering into NDA-protected Photoshop projects, but rather describing the way it uses AI to monitor its ecosystem for illegal content such as child sexual abuse material.
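Adobe hasn’t published its scanning pipeline, but automated detection of known illegal imagery typically means hash matching: comparing a fingerprint of each file against a database of fingerprints of known material. Here’s a toy sketch of that idea, with a cryptographic hash standing in for the perceptual hashes (such as PhotoDNA) that production systems actually use:

```python
import hashlib

# Placeholder: production systems use vendor-supplied databases of
# perceptual hashes of known illegal material, not an empty set of
# cryptographic digests.
KNOWN_BAD_HASHES: set[str] = set()

def scan_upload(file_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches the known-bad list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

A cryptographic hash only catches byte-identical copies; real systems use perceptual hashes precisely because they survive resizing and re-encoding.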
There’s an old truism in tech: “If you’re not paying for it, you’re the product.” Well, Adobe’s products aren’t cheap, so let’s rework it: “If you’re using it, you’ve become AI training data.” Oh, and if you’re concerned about privacy, you should always read the fine print.