U.S. President Donald Trump hosts his first cabinet meeting with Elon Musk in attendance, in Washington, D.C., U.S., on Feb. 26, 2025.
What happens when you ask artificial intelligence to create a video of gilded Trump statues (straight out of Turkmenistan) and new Trump Hotels (straight out of Atlantic City) featuring an up-tempo, pro-Trump track (straight from the J6 Prison Choir’s club remix album)? You get the US president’s Truth Social post advertising his postwar Gaza proposal, of course.
While Donald Trump’s rhetoric on redeveloping Gaza has been absent from headlines recently, this AI music video troll serves as a win-win-win for him: It reinvigorates his base, enrages his opposition, and leaves his true intentions up for debate.
What isn’t up for debate? The video’s belly dancers, who have female bodies and bearded male heads, wouldn’t appreciate his slew of executive orders enforcing a strict gender binary. Don’t forget to always double-check your AI outputs …
A federal district court judge in Delaware issued the first major ruling on whether using copyrighted materials to train artificial intelligence systems constitutes copyright infringement.
On Feb. 11, Judge Stephanos Bibas granted a summary judgment to Thomson Reuters, which makes the legal research service Westlaw, against a company named Ross Intelligence. The judge found that Ross infringed on Reuters’ copyrights by using Westlaw headnotes — essentially case summaries — to train its own legal research AI.
Federal law allows fair use of copyrighted materials, but such uses generally need to be “transformative” in nature. Evidently, Ross flew too close to the sun in using Westlaw summaries to train what could become a commercial rival.
For Matthew Sag, an Emory University law professor who studies artificial intelligence and machine learning, the ruling was something of a surprise. Sag criticized the lack of explanation in the judge’s decision and said he believed its reach would be limited.
“It seems to me that the most important factor in the court’s decision was the fact that the defendant trained its model on Westlaw’s data in order to produce something that would perform almost exactly the same function as Westlaw,” Sag said. “That makes it quite different to a generative AI model trained on half of the Internet that has a broad range of capabilities and is not designed to specifically replace any one input or any one source of inputs.”
Robert Brauneis, a law professor and co-director of the Intellectual Property Program at the George Washington University Law School, said the judge’s ruling weakens the fair-use argument for generative AI developers, who could be seen as competing directly with the artists they’re allegedly copying. “Generative AI developers are using the copyrighted works of writers, artists, and musicians to build a service that would compete with those artists, writers, and musicians,” he said.
Major litigation over generative AI is still working its way through the courts — notably in the New York Times Company’s lawsuit against OpenAI, and class action suits by artists and writers against nearly every major AI firm. “These cases are still relatively early, and there is a lot of civil procedure to get through — fights over discovery, class action certification, venue — before we get to interesting questions of copyright law,” Sag said. “We are still a long way from a definitive judicial resolution of the basic copyright issue.”
An image of a firefly from Adobe Firefly.
A floppy-eared, brown-eyed beagle turns her head. A sunbeam shines through the driver’s side window. The dog is outfitted in the finest wide-brimmed sun hat, which fits perfectly atop her little head.
If the hat-wearing dog weren’t clue enough, I’m describing an AI-generated video. There are other hints too: If you look closely, the dog is sitting snugly between two black-leather seats, which are way too close together. Outside, cornfields and mountains start to blur, and the road contorts behind the car.
Despite these problems, this is still one of the better text-to-video generation models I’ve encountered. And it’s not from a major AI startup, but rather from Adobe, the company behind Photoshop.
Adobe first released its AI model, Firefly, for image generation in March 2023 and followed it up this month with a video generator, which is still in beta. (You can try out the program for free, but we paid $10 after quickly hitting a limit on how many videos we could generate.)
Firefly’s selling point isn’t just that it makes high-quality video clips or that it integrates with the rest of the Adobe Creative Cloud. Adobe also promises that its AI tools are all extremely copyright-safe. “As part of Adobe’s effort to design Firefly to be commercially safe, we are training our initial commercial Firefly model on Adobe Stock images, openly licensed content, and public domain content where copyright has expired,” the company writes on its website.
In the past, Adobe has also offered to pay the legal bills of any enterprise user of Firefly’s image model that is sued for copyright violations — “as a proof point that we stand behind the commercial safety and readiness of these features,” Adobe’s Claude Alexandre said in 2023. (It’s unclear if any users have taken the company up on the offer.)
eMarketer’s Gadjo Sevilla said that Adobe has a clear selling point amid a fresh crop of video tools from the likes of OpenAI, ByteDance, and Luma: its copyright promises. “Major brands like Dentsu, Gatorade, and Stagwell are already testing Firefly, signaling wider enterprise adoption,” Sevilla said. “Making IP-safe AI available in industry-standard tools can help Firefly, and by extension Adobe, gain widespread adoption in copyright-friendly AI image generation.”
But Adobe’s track record isn’t spotless. The company had a mea culpa last year after AI images from rival Midjourney were found in Firefly’s training set, according to Bloomberg; the images were likely submitted to the Adobe Stock program and slipped past content moderation guardrails.
Firefly’s video model is still new, so public testing will show how well it’s received and what exactly users get it to spit out. For our trial, we asked for “an extreme close-up of a flower” and selected settings for an aerial shot and an extreme close-up.
We also asked Firefly to show us President Donald Trump waving to a crowd. It wouldn’t show us Trump because of content rules around politics but gave us some other guy.
And, of course, we asked to see if Mickey Mouse — who is at least partly in the public domain — could ride a bicycle. At least on that front, it’s copyright-safe. You’re welcome, Disney.
When compared to OpenAI’s Sora video generator, Firefly takes longer (about 30 seconds vs. 15 for Sora) and is not quite as polished. But if I get into trouble using Adobe’s products, well, at least a quick call to their general counsel’s office should solve my problems.
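For readers curious what a request like ours might look like if sent to a generation service programmatically rather than through the web interface, here is a minimal sketch. It is purely illustrative: the endpoint URL, parameter names, and response handling are hypothetical placeholders, not Adobe’s actual Firefly API.

```python
# Hypothetical illustration only: the endpoint, credential, and parameter names
# below are placeholders, not Adobe's actual Firefly API.
import requests

API_URL = "https://example.com/v1/text-to-video"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential

payload = {
    "prompt": "an extreme close-up of a flower",  # the text prompt from our trial
    "camera_angle": "aerial",                     # hypothetical name for the aerial-shot setting
    "shot_size": "extreme close-up",              # hypothetical name for the framing setting
    "duration_seconds": 5,
}

# Send the request and surface any HTTP error; a real service would typically
# return a job ID or a URL for the rendered clip.
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()
print(response.json())
```

The structure mirrors what Firefly’s web interface exposes: a free-text prompt plus a handful of structured camera settings, which is the common pattern for text-to-video tools.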
Security cameras representing surveillance.
On Friday, OpenAI announced that it had uncovered a Chinese AI surveillance tool. The tool, which OpenAI called Peer Review, was developed to gather real-time data on anti-Chinese posts on social media.
The program wasn’t built on OpenAI software but rather on Meta’s open-source Llama model; OpenAI discovered it because the developers used the company’s tools to “debug” code, which tripped its sensors.
OpenAI also found another project, nicknamed Sponsored Discontent, that used OpenAI tech to generate English-language social media posts that criticized Chinese dissidents. This group was also translating its messages into Spanish and distributing them across social media platforms targeting people in Latin America with messages critical of the United States. Lastly, OpenAI’s research team said it found a Cambodian “pig butchering” operation, a type of romance scam targeting vulnerable men and getting them to invest significant amounts of money in various schemes.
With the federal government cutting AI safety, law enforcement, and national security efforts, the onus will increasingly fall on private companies like OpenAI to discover such AI scams and operations, to self-regulate, and to self-report what they find.
Then-Republican presidential candidate Donald Trump gestures and declares "You're fired!" at a rally in New Hampshire in 2015.
Sweeping cuts are expected to come to the US National Institute of Standards and Technology, or NIST, the federal lab housed within the Department of Commerce. NIST oversees, among other things, chips and artificial intelligence technology. The Trump administration is reportedly preparing to terminate as many as 500 of NIST’s probationary employees.
It’s unclear when the firings will hit, but it’s been mere weeks since Trump repealed Biden’s sweeping 2023 executive order on AI. In that order, the Biden administration had entrusted NIST with managing semiconductor manufacturing funds and establishing safety standards for AI development and use.
NIST also oversees the US Artificial Intelligence Safety Institute, the initiative in charge of testing advanced AI systems for safety and security and setting standards for the safe development of AI. Since the institute is still nascent (it was established in 2023), it could be especially vulnerable to across-the-board cuts to probationary staff.
President Joe Biden signs an executive order about artificial intelligence as Vice President Kamala Harris looks on at the White House on Oct. 30, 2023.
US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, ordering a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan developed under the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run tomorrow and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and French President Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly large language models trained on huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology touted the country’s “strong credentials” in AI: the sector employs 50,000 people, contributes £3.7 billion to the domestic economy, and is home to key firms like DeepMind (now owned by Google), while the government is also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Capitol Hill, Washington, D.C.
On Feb. 11, US Vice President JD Vance told attendees at the AI Action Summit in Paris, France, that Europe should pursue regulations that don’t “strangle” the AI industry.
That display came after Meta and Google publicly criticized Europe’s new code of practice for general-purpose AI models, part of the EU’s AI Act, earlier this month. Meta’s Joel Kaplan said that the rules impose “unworkable and technically infeasible requirements” on developers, while Google’s Kent Walker called them a “step in the wrong direction.”
The overseas criticism from Washington and Silicon Valley may be having an impact. The European Commission recently withdrew its planned AI Liability Directive, which was designed to make tech companies pay for the harm caused by their AI systems. Henna Virkkunen, the Commission’s tech chief, said that it is softening its rules not because of pressure from US officials, but rather to spur innovation and investment in Europe.
But these days, Washington and Silicon Valley are often speaking with the same voice.