Amazon is set to announce its newest AI model
The multimodal model, codenamed Olympus, can reportedly search video archives using text commands. It’s unclear whether it can generate video content like OpenAI’s Sora or Meta’s Movie Gen, text-to-video models that are still not broadly released to the public.
But the new model is a sign not only of Amazon’s internal ambitions but also of its potentially decreasing reliance on a key investment: Anthropic, the maker of the chatbot Claude.
On Nov. 22, Amazon announced it’s investing another $4 billion into Anthropic, doubling its total investment to $8 billion. In exchange, Anthropic committed to using Amazon’s Trainium series of chips, Amazon’s “moonshot” attempt to rival Nvidia, the world’s leading AI chip designer.
Amazon already has an enterprise chatbot called Q, as well as AI business solutions for companies through its Amazon Web Services cloud offerings. Olympus could be announced as soon as the annual AWS re:Invent conference being held this week in Las Vegas, Nevada. Matt Garman, who took over as CEO of AWS in May, will address conference-goers on Tuesday and disclose “real, needle-moving changes” on AI.
If Olympus is indeed a business-to-business offering from AWS, then perhaps Anthropic’s Claude will continue being Amazon’s consumer-facing bet while it focuses on the more lucrative work of selling to other companies.
OpenAI’s little new model
OpenAI is going mini. On July 18, the company behind ChatGPT announced GPT-4o mini, its latest model. It’s meant to be a cheaper, faster, and less energy-intensive version of the technology. The smaller model is marketed to developers who rely on OpenAI’s language models and want to save money.
The move also comes as AI companies try to cut their own costs, reduce their energy dependence, and answer calls from critics and regulators to lower their energy burden. Training and running AI often requires access to electricity-guzzling data centers, which in turn demand copious amounts of water to keep from overheating.
Moving forward, look for AI companies to offer a multitude of options to cost-conscious and energy-conscious users.
To see where data centers have cropped up in North America, check out our latest Graphic Truth here.