Search engines sing the AI blues
News companies have been split in dealing with AI. Some, like the New York Times, are suing AI firms over copyright violations, while others, like the Wall Street Journal, are striking deals. But most of the attention has been on OpenAI, maker of ChatGPT and the biggest name in the space. This week, consternation brewed over how Perplexity, a so-called AI search engine, is using news articles without permission.
The company recently debuted a feature called Perplexity Pages, which compiles news pages on specific topics. But Forbes reported that the results are nearly carbon copies of work from journalistic outlets, with limited attribution: the outlets aren’t named in the text, only linked via “small, easy-to-miss logos.”
One deeply reported piece by Forbes about former Google CEO Eric Schmidt’s stealth drone project was aggregated with limited attribution and got nearly 18,000 views on Perplexity’s site. The same thing happened with a piece on TikTok and hacking.
“This is investigative reporting, sourced painstakingly from whistleblowing company insiders,” Forbes reporter Emily Baker-White wrote on X. “AI can't do that kind of work, and if we want people who do, this can't be allowed to happen.”
Perplexity CEO Aravind Srinivas responded to Forbes, saying the product still has “rough edges” and will be improved.
Meanwhile, a Wired reporter found that Google’s AI Overviews drew heavily from his original reporting with minimal changes. No one has yet filed suit, but if they do, a court could decide whether this is a copyright violation or protected under the principle of fair use.

Newspapers fight back, file suit against OpenAI
Eight major newspapers owned by Alden Global Capital sued ChatGPT maker OpenAI on Tuesday in federal court, alleging copyright infringement. The group includes major regional newspapers, such as the New York Daily News, the Chicago Tribune, and the Orlando Sentinel.
While many publishers have struck lucrative licensing deals with AI companies, allowing them to train their large language models by scraping their websites, others have opted for litigation, most notably the New York Times. The Gray Lady sued OpenAI in December, alleging the Sam Altman-led startup violated federal copyright law by illegally training its model on Times journalism and spitting out responses indistinguishable from what people can find on its website or in its print newspaper. OpenAI has said the suit is “without merit.”
The Alden group also followed the Times' lead in suing Microsoft. Microsoft is OpenAI's biggest investor, having poured $13 billion into the startup, and uses the GPT suite of language models in its Copilot assistant and Bing search products. If a large language model proprietor is found to have violated copyright statutes, it could pose an existential threat to that model — meaning the company may have to train a new one from scratch.
Bad-behaving bots: Copyright Office to the rescue?
It might not be the flashiest agency in Washington, DC, but the Copyright Office, part of the Library of Congress, could be key to shaping the future of generative AI and the public policy that governs it.
The New York Times reports that tech companies – no strangers to spending big bucks on lobbying – are lining up to meet with Shira Perlmutter, who leads the Copyright Office as Register of Copyrights. Music and news industry representatives have requested meetings too. And Perlmutter’s staff is inundated with artists and industry executives asking to speak at public “listening sessions.”
Copyright is central to the future of generative AI. Artists, writers, record labels, and news organizations have all sued AI companies, including Anthropic, Meta, Microsoft, and OpenAI, claiming they have broken federal copyright law by training their models on copyrighted works and, often, spitting out results that rip off or replicate those same works.
The Copyright Office is set to release three reports detailing its position on the issue this year, which the Times says will be “hugely consequential” as lawmakers and courts grapple with nuanced questions about how intellectual property law jibes with the cutting-edge technology.
Antitrust regulators zero in on AI
The watchful eyes of US antitrust enforcers are squarely on the artificial intelligence industry.
Last week, the US Federal Trade Commission announced it was opening an inquiry into multibillion-dollar investments by tech giants into smaller AI startups. Amazon, Google, and Microsoft made investments in Anthropic and OpenAI, and while they didn’t buy them outright, that has not stopped federal antitrust regulators from flexing some muscle.
Microsoft poured $13 billion into OpenAI, the company that ushered in the start of the AI boom with the release of its chatbot ChatGPT in November 2022, and the FTC is also probing two separate investments into Anthropic, which makes the AI-powered chatbot Claude, by Amazon ($4 billion) and Google ($2 billion).
It’s possible that in a more hands-off regulatory environment, these Silicon Valley stalwarts would have simply bought the pure-play startups outright. But doing so these days would enlarge the targets already on their chests.
The US government’s record of challenging corporate dealmaking in the internet sector has been spotty over the past two decades. The historical rate at which the government challenges mergers is “far, far lower in the digital sector,” says Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. This is research she oversaw and testified about to Congress in her previous role as the president of the American Antitrust Institute.
Federal antitrust enforcement is now led by FTC chair Lina Khan, a longtime critic of Big Tech dating back to her days as a student at Yale Law School, and the Department of Justice’s antitrust chief Jonathan Kanter, who spent his final years in private practice in part representing smaller tech firms in lawsuits against Apple and Google. In the first few years of their tag-team tenure, Khan and Kanter have sued Google for abusing its monopoly in advertising, sued Amazon for anticompetitive behavior in the online retail market, and unsuccessfully sued Meta to block its acquisition of the VR firm Within. Khan scored a big win in December when a federal court upheld the agency’s decision to block a $7.1 billion biotech merger, and several tech companies including Adobe and Figma have terminated merger plans after meeting with antitrust regulators. Still, it could take years for Khan and Kanter to notch their first major victory over Big Tech.
In a recent speech at Stanford University, Khan said the government wouldn’t turn a blind eye to anti-competitive dealmaking in the AI space, noting that the FTC “will be clear-eyed in ensuring that claims of innovation are not used as cover for lawbreaking.”
Brian Albrecht, chief economist for the International Center for Law & Economics, said there’s no question that Khan “believes there was too little scrutiny on previous tech acquisitions and wants to get ahead.” He says she’s been overeager with a “desire to bring any tech case, instead of good cases” (such as the Meta-Within case). Still, while the FTC hasn’t yet brought a case against these AI investments, Albrecht says it “has a flavor of ‘we need to do something, and this is something.’”
The FTC inquiry is just that — merely an inquiry. The agency hasn’t yet launched a formal investigation into any of these deals, which would be a necessary step before it decides whether to bring lawsuits. In fact, recent reports indicate that the FTC and DOJ both want to investigate Microsoft’s stake in OpenAI but can’t agree over who’ll get to do it.
But it’s a warning shot, a declaration of intent, a resolution that the investment-not-acquisition strategy — if that’s the strategy after all — will not go unnoticed.
Investments, not acquisitions
Antitrust regulators have broad authority over partial-ownership investments, not just full-on corporate takeovers. That’s important, Moss says, because her research shows that, over the past three decades, investments in AI have outnumbered acquisitions involving AI by roughly three to one. “That tells you a lot about how companies are approaching AI,” she says.
Microsoft’s arrangement with OpenAI is somewhat stranger than the others because while it’s invested an astronomical sum in the ChatGPT maker, OpenAI is technically run by a nonprofit. Until recently, Microsoft didn’t even occupy a seat on that nonprofit’s board! But when the board dismissed OpenAI CEO Sam Altman in November, Microsoft’s power was hard to ignore. Microsoft promised to hire all of the 700 employees threatening to leave OpenAI over the ouster, successfully lobbied for Altman’s reinstallation, and won a (nonvoting) board seat in the aftermath.
“The arrangement does not get some sort of special immunity because it isn't a standard investment,” Albrecht says. “That being said, investments, joint ventures, strategic partnerships have often (and should) received more leniency from the agencies.”
And even though OpenAI is run by a nonprofit, that doesn’t obviate the need for antitrust enforcement. “The exercise of market power affects prices, quality, and innovation similarly in the case of for-profit and nonprofit organizations,” Moss says, noting that many universities and hospitals have nonprofit status and have received antitrust scrutiny.
The UK’s Competition and Markets Authority is already investigating Microsoft’s investment in OpenAI, and Microsoft has defended itself by pointing to the odd nature of its investment. Instead of buying equity in OpenAI, Microsoft receives half of the startup’s revenues until the $13 billion investment is repaid, according to the Los Angeles Times.
A new era for antitrust
In the past few decades, Silicon Valley technology companies have become the most valuable firms in the world. Seven of the top nine are tech companies with AI investments (Amazon, Apple, Google, Meta, Microsoft) or chip manufacturers (NVIDIA and TSMC), all of which have massive direct or indirect stakes in the success of AI.
Many critics of these Big Tech firms say they have grown bloated and unruly without proper antitrust enforcement to keep them from gobbling up competitors. That seems to be the view of Khan and Kanter, too — plus, many overseas antitrust regulators who could make life uncomfortable for any of these global companies.
And these companies know that.
It’s hard to know whether in another time, facing different scrutiny, Microsoft might have tried to buy OpenAI. Or if Amazon or Google would’ve made an offer to buy Anthropic.
“The current state is that any Big Tech company has to worry about the FTC for any major investment or business decision they make,” Albrecht says. “That makes investments relatively more attractive than acquisitions.”
But this inquiry might reveal that the gap, he says, isn’t as big as the companies in question — some of the biggest AI firms in the world — might wish.
Is ChatGPT stealing from The New York Times?
We told you 2024 would be the year of “copyright clarity,” and while some legal disputes were already winding their way through the US courts, a whopper dropped on Dec. 31.
Just hours before the Big Apple’s ball dropped, The New York Times filed a lawsuit against the buzziest AI startup in the world, OpenAI, and its lead investor, Microsoft.
In its 69-page complaint filed in federal court in Manhattan, The New York Times alleged that OpenAI illegally trained its large language models on the Gray Lady’s copyrighted stories. It claims that OpenAI violated its copyright when it ingested the stories and that it continues to do so repeatedly with the information it spits out.
The copying was so brazen, the lawsuit says, that the AI products powered by OpenAI’s large language model, GPT-4, can replicate full — or nearly full — versions of Times articles if prompted, undermining the paper’s subscription business. That includes OpenAI’s popular chatbot ChatGPT, as well as Microsoft’s Bing Chat and Copilot products.
What the Times has to prove
Lawyers for the Times need to first demonstrate that the paper has a valid copyright and, second, that the defendants violated it.
“Facts aren’t copyrightable,” says Kristelia Garcia, an intellectual property law professor at Georgetown University, noting that while an organization’s exact wording in covering a news event is copyrightable, the underlying event it's covering is not. Additionally, “there is a fair use exception for ‘newsworthy’ use of copyrighted work,” she says, a tenet that affords protection to anyone reporting the news.
The fair use doctrine – the main legal principle in question – is what allows you to parody a popular song or quote a novel in a critical review. Generally, the courts have ruled that to qualify as fair use, a work must be “transformative” and not compete commercially against the original work.
In the suit, the Times says that there’s nothing transformative about how OpenAI and Microsoft are using Times stories. Instead, it claims that the “GenAI models compete with and closely mimic the inputs used to train them,” and that the companies owe the Times “billions of dollars in statutory and actual damages.”
The view from OpenAI
OpenAI, which had been engaged in deep discussions over the matter with the Times, was caught off guard by the legal move.
“Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development,” it said in a statement after the lawsuit was filed, noting subsequently that the lawsuit was "without merit."
The company has been riding the success of its industry-standard AI tools, chiefly the chatbot ChatGPT, toward an anticipated valuation north of $100 billion, and many users are excited about the much-hyped launch of GPT-5.
But copyright law is one snag threatening to upend OpenAI’s skyward trajectory, and Sam Altman knows it. That’s why he and his colleagues have already started paying media companies for the right to license their content. According to recent reports, OpenAI is offering media outlets annual payments in the $1 million to $5 million range — not the “billions” that the Times says it’s owed.
AI firms have already been hit with copyright suits from famous authors and artists over their efforts to train their models to be stylistically similar to them, but the Times lawsuit goes further, alleging straight-up copying in the input and output.
What’s likely to come next?
The New York Times was able to effectively manipulate ChatGPT to spit out its articles nearly verbatim: In its brief, it shows that it asked the chatbot to deliver a Times story one paragraph at a time.
When we at GZERO tried this, the chatbot no longer accepted this method, telling us: “I apologize for any inconvenience, but I can't provide verbatim copyrighted text from The New York Times or any other external source.” But it also said, “I can offer a brief summary or answer questions related to the article's content.” It’s unclear whether OpenAI made a change in response to the lawsuit.
Garcia thinks that the Times has a good case as long as it can demonstrate that “OpenAI ingested Article X and then spit out Article Y that shared 500 to 650 identical words.” But, ultimately, she said she’d be surprised if the case ever goes to trial — a process that would take years.
It’s much more likely, she thinks, that the Times is seeking a substantial settlement that pays what it sees as fair value for its journalism.
An adverse decision in court could be a deep threat to the AI business model as a whole — if a judge deems that the training process infringes on copyright, it could change the trajectory of this innovative new technology.