Hard Numbers: Could Microsoft buy TikTok?, Get me the Operator, Meta and ByteDance spend on AI, ElevenLabs’ billions, Ready for “Humanity’s Last Exam”?
2020: Microsoft is in talks to acquire TikTok, according to President Donald Trump. If that rings a bell, it’s because Microsoft sought to buy the social media app in 2020, the last time Trump tried to ban it. That deal fell through, and Microsoft CEO Satya Nadella later called the attempted TikTok takeover the “strangest thing I've ever worked on.” This time around, all the company has said on the matter is that it “has nothing to share at this time.” Meanwhile, Trump has also hinted at “great interest in TikTok” from several companies.
200: OpenAI announced Operator, its AI “agent,” in an experimental “research preview” on Thursday. The point is that this model can not only chat with you but also actually perform tasks for you, like booking a restaurant reservation or ordering food for delivery. It’s currently available to subscribers of ChatGPT Pro, a $200-a-month subscription.
65 billion: Meta said Friday it expects to spend up to $65 billion in 2025, up from $40 billion in 2024, to fuel its growing AI ambitions. Meanwhile, TikTok’s Chinese parent company ByteDance has reportedly earmarked $21 billion, including $12 billion for AI infrastructure.
3 billion: The AI voice-cloning company ElevenLabs announced a new $250 million funding round on Friday that values the company at around $3 billion. We tried out ElevenLabs’ software last year to clone our author’s voice and translate it into different languages.
3,000: Researchers at the Center for AI Safety and Scale AI released “Humanity’s Last Exam” on Thursday, a 3,000-question multiple-choice and short-answer test designed to evaluate AI models’ capabilities. With AI models acing most existing tests, the researchers strove to create one that can stump most of them, or at least show when they’ve become truly superintelligent. For now, the models are struggling: All of the current top models fail the exam, with OpenAI’s o1 model scoring the highest at 8.3%.

Warning: Your AI data might be poisoned
Generative AI models are susceptible to a kind of cyberattack called “data poisoning,” whereby malicious actors intentionally manipulate known source material to change the model’s understanding of an issue. It’s like a high-tech version of giving a school rival a fake exam answer key.
Researchers say that concerns about data poisoning are mostly hypothetical at this point, but a new report shows how Wikipedia entries could be edited at strategic times so that incorrect information is captured by models scraping the online encyclopedia. It’s an early warning to AI companies, and to those who depend on their models, that attackers could soon find creative ways to target the most powerful systems and exploit their vulnerabilities.
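To make the mechanics concrete, here is a minimal, purely illustrative sketch (our own toy example, not drawn from the report): a make-believe “model” that learns a fact by majority vote over scraped snippets will repeat whatever claim dominates the corpus at the moment it is scraped, which is exactly why the timing of a malicious edit matters.

```python
# Toy illustration of data poisoning (hypothetical, not any real attack):
# the "model" learns a fact by majority vote over scraped snippets, so an
# attacker who slips enough bad snippets into the source before the scrape
# flips the answer the model learns.
from collections import Counter

def train_fact(snippets):
    """Return the most common claim in the scraped corpus."""
    return Counter(snippets).most_common(1)[0][0]

clean_corpus = ["The Eiffel Tower is in Paris"] * 5

# The attacker times their edits so the false text is live when the scraper runs.
poisoned_corpus = clean_corpus + ["The Eiffel Tower is in Rome"] * 6

print(train_fact(clean_corpus))     # -> The Eiffel Tower is in Paris
print(train_fact(poisoned_corpus))  # -> The Eiffel Tower is in Rome
```

Real systems are vastly more complicated, but the failure mode is the same: the model has no independent way to tell a well-timed lie from the truth in its training data.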
Data poisoning isn’t all bad: Some copyright holders are using a form of it as a defensive mechanism to prevent AI models from gobbling up their creative works. One program, called Nightshade, was developed to subtly alter an image so that AI image-generation models that ingest it during training learn distorted information.
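For a rough sense of the defensive version, here is a hypothetical sketch of the underlying idea (not Nightshade’s actual algorithm, which relies on carefully optimized perturbations): change an image’s pixels just enough that people barely notice, while any model that scrapes and trains on the file no longer sees the original data.

```python
# Minimal sketch of defensive poisoning (illustrative only, NOT Nightshade):
# nudge each pixel of a grayscale image by a few levels so the posted file
# differs from the original even though it looks the same to a human viewer.
import random

def perturb(pixels, strength=4):
    """Shift each pixel value by up to +/- strength, clamped to the 0-255 range."""
    return [
        [max(0, min(255, p + random.randint(-strength, strength))) for p in row]
        for row in pixels
    ]

original = [  # a tiny 3x3 grayscale "image"
    [120, 121, 119],
    [118, 122, 120],
    [119, 120, 121],
]

protected = perturb(original)
print(protected)  # nearly identical to the eye, but no longer the original data
```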