Meta, the parent company of Facebook and Instagram, has prided itself on releasing innovative open-source models as an alternative to the proprietary — or closed-source — models of OpenAI, Anthropic, and other leading AI developers. Now, it claims one of its newest models can evaluate other AI models. (That really is meta.)
Researchers at Meta’s Fundamental AI Research – yep, they call it their FAIR team – detailed their work on what they’re calling a “self-taught evaluator” in an August white paper ahead of the new model’s launch. The researchers sought to train an AI to judge the outputs of other models using not human-annotated preference data but synthetic data generated by the models themselves. In short, Meta is trying to develop an AI model that can evaluate and improve itself without relying on humans.
This could push AI to a place where it can sense its own imperfections and improve without being told to do so — a greater level of autonomy. Dystopian? Maybe.