On Friday, OpenAI announced that it had uncovered a Chinese AI surveillance tool. The tool, which OpenAI called Peer Review, was developed to gather real-time reports on anti-China posts on social media.
The program wasn’t built on OpenAI software but on Meta’s open-source Llama model; OpenAI discovered it only because the developers used the company’s tools to “debug” code, which tripped its detection systems.
OpenAI also found another project, nicknamed Sponsored Discontent, that used OpenAI tech to generate English-language social media posts criticizing Chinese dissidents. The group was also translating its messages into Spanish and distributing them across social media platforms in Latin America, targeting readers there with messages critical of the United States. Lastly, OpenAI’s research team said it found a Cambodian “pig butchering” operation, a type of romance scam in which fraudsters build trust with victims, often vulnerable men, and persuade them to invest significant sums of money in fraudulent schemes.
With the federal government cutting funding for AI safety, law enforcement, and national security efforts, the onus for discovering such AI-enabled scams and operations will increasingly fall to private companies like OpenAI, which must not only self-regulate but also self-report what they find.