After a gunman murdered 51 people at two mosques in Christchurch, New Zealand, in 2019, streaming the massacre on social media, then-Prime Minister Jacinda Ardern and French President Emmanuel Macron brought government leaders and technology companies together, asking them to crack down on online extremism. Now, AI companies are getting in on the act.
OpenAI and Anthropic, two of AI’s biggest startups, signed on to the Christchurch Call to Action at a summit in Paris on Friday, pledging to suppress terrorist content. The perpetrator of the Christchurch shooting was reportedly radicalized by far-right content on Facebook and YouTube, and he livestreamed the attack on Facebook.
While the companies have agreed to “regular and transparent public reporting” about their efforts, the commitment is voluntary, meaning they won’t face real consequences for failing to comply. Still, it’s a strong signal that the battle against online extremism, which began with social media companies, is now coming for AI companies.
Under US law, internet companies are generally shielded from liability for user-posted content by Section 230 of the Communications Decency Act. The Supreme Court sidestepped the issue last year in two terrorism-related cases, ruling that the plaintiffs had failed to show Google and Twitter could be held liable under US anti-terrorism laws. But a rich debate is brewing over whether Section 230 protects AI chatbots like ChatGPT, a question that’s bound to wind up in court. Sen. Ron Wyden, one of the authors of Section 230, has called AI “uncharted territory” for the law.