Meta, the parent company of Facebook and Instagram, has prided itself on releasing innovative open-source models as an alternative to the proprietary — or closed-source — models of OpenAI, Anthropic, and other leading AI developers. Now, it claims one of its newest models can evaluate other AI models. (That really is meta.)
Researchers at Meta’s Fundamental AI Research – yep, they call it their FAIR team – detailed their work on what they’re calling a “self-taught evaluator” in an August white paper ahead of the new model’s launch. The researchers set out to train an AI judge not on human preference annotations but on synthetic, model-generated data. In short, Meta is trying to develop an AI model that can evaluate and improve itself without relying on humans.
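The core trick, replacing human preference labels with synthetic ones, can be illustrated with a toy sketch. Everything below is a simplifying assumption rather than the FAIR team's actual method: a real system would use an LLM as the judge and model-generated response pairs, while here a known-worse response is synthesized by deleting words, so each pair carries a free preference label that no human ever supplied.

```python
# Toy sketch of training an evaluator on synthetic preference pairs.
# Illustrative only: the word-count "judge" and corruption heuristic
# stand in for an LLM judge and model-generated responses.

def corrupt(answer: str) -> str:
    """Synthesize a deliberately worse response by dropping every other
    word, so each (answer, corrupt(answer)) pair is labeled for free."""
    words = answer.split()
    return " ".join(words[::2])

def features(response: str) -> int:
    # Single toy feature: number of words in the response.
    return len(response.split())

def judge(w: float, response: str) -> float:
    """The evaluator: scores a response; higher should mean better."""
    return w * features(response)

def train_judge(answers, epochs=20, lr=0.05):
    """Perceptron-style pairwise training: whenever the judge fails to
    rank the intact answer above its corrupted twin, nudge the weight
    toward the features of the preferred response."""
    w = 0.0
    for _ in range(epochs):
        for good in answers:
            bad = corrupt(good)
            if judge(w, good) - judge(w, bad) <= 0:
                w += lr * (features(good) - features(bad))
    return w

answers = [
    "the capital of france is paris and it lies on the seine",
    "water boils at one hundred degrees celsius at sea level",
]
w = train_judge(answers)
print(w > 0 and judge(w, answers[0]) > judge(w, corrupt(answers[0])))  # → True
```

The “self-taught” part is the loop this enables: once the judge improves, it can label fresh response pairs itself, producing training data for the next round, with no human preferences entering at any step.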
This could push AI to a place where it can sense its own imperfections and improve without being told to do so — a greater level of autonomy. Dystopian? Maybe.