Call to crack down on terrorist content
OpenAI and Anthropic, two of AI’s biggest startups, signed on to the Christchurch Call to Action at a summit in Paris on Friday, pledging to suppress terrorist content. The perpetrator of the Christchurch shooting was reportedly radicalized by far-right content on Facebook and YouTube, and he livestreamed the attack on Facebook.
While the companies have agreed to “regular and transparent public reporting” about their efforts, the commitment is voluntary — meaning they won’t face real consequences for any failures to comply. Still, it’s a strong signal that the battle against online extremism, which started with social media companies, is now coming for AI companies.
Under US law, internet companies are generally shielded from legal liability by Section 230 of the Communications Decency Act. The Supreme Court sidestepped the issue last year in two terrorism-related cases, ruling that the plaintiffs didn’t have standing to sue Google and Twitter under US anti-terrorism laws. But a rich debate is brewing over whether Section 230 protects AI chatbots like ChatGPT, a question that’s bound to wind up in court. Sen. Ron Wyden, one of the authors of Section 230, has called AI “uncharted territory” for the law.
How tech companies aim to make AI more ethical and responsible
Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models in particular, a kind of AI that includes Google’s Bard and OpenAI’s ChatGPT, run the risk of providing potentially dangerous information.
Should someone, say, ask for instructions to build a bomb, or advice on harming themselves, it would be better that AI not answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream, from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that will direct users toward counseling, or explain why they can’t answer.
And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- The AI arms race begins: Scott Galloway’s optimism & warnings ›
- Why Big Tech companies are like “digital nation states” ›
- Will consumers ever trust AI? Regulations and guardrails are key ›
- The UN will discuss AI rules at this week's General Assembly ›
- The AI power paradox: Rules for AI's power ›
- How should artificial intelligence be governed? ›
Hearing the Christchurch Call
After a terrorist attack on a mosque in Christchurch, New Zealand, was live-streamed on the internet in 2019, the Christchurch Call was launched to counter the increasing weaponization of the internet and to ensure that emerging tech is harnessed for good.
Since its inception, the Christchurch Call has evolved to include more than 120 government and private sector stakeholders. The organization, pioneered by the French and New Zealand governments, will hold its next major summit at the Paris Peace Forum in November.
Dame Jacinda Ardern, former Prime Minister of New Zealand who led the response to the Christchurch attack; Ian Bremmer, president and founder of Eurasia Group and GZERO Media; and Brad Smith, vice chair and president of Microsoft sat down with CNN’s Rahel Solomon for a Global Stage livestream on the sidelines of the UN General Assembly in New York. The event was hosted by GZERO Media in partnership with Microsoft.
Reflecting on the catastrophic attack that prompted the formation of the Call and its mission, Dame Ardern recalled how, on that day, “I reached for my phone to be able to share that message on a social media platform, I saw the live stream.” She notes how that became a galvanizing moment: In the “aftermath of that period, we were absolutely determined … we had the attention of social media platforms in particular to do something that would try and prevent any other nation from having that experience again.”
That led to the formation of the organization in a mere eight weeks, Ardern said. But identifying the hate speech and online extremism that can fuel violence is no small feat, she acknowledged, adding that while the goal can indeed appear “lofty,” the group’s focus is on “setting expectations” around what should and shouldn’t be tolerated online.
But what did tech companies learn from the Christchurch experience about their own roles in moderating content, overseeing algorithms, and mitigating potential radicalization and violence?
One major development that came out of the Christchurch Call, Smith notes, is what’s known as a content incident protocol. “Basically, you have the tech companies and governments and others literally on call like doctors being summoned to the emergency room at tech companies and in governments so that the moment there is such a shooting, everybody immediately is alerted.”
Emerging technologies – most notably artificial intelligence – mean that the Christchurch Call must remain nimble in the face of new threats. Still, Ardern says that’s not necessarily a bad thing because AI presents both challenges and opportunities for the organization. “On the one hand we may see an additional contribution from AI to our ability to better manage content moderation that may be an upside,” she says. But “a downside,” she notes, “is that we may see it continue to contribute to or expand on some of the disinformation which contributes to radicalization.”
Bremmer shared this view of AI, calling it both “a tool of extraordinary productivity and growth, indeed globalization 2.0,” while also acknowledging the threat of disinformation proliferation: “Fundamental to a democratic society, an open society, a civil society, fundamental to human rights and the United Nations Charter is the idea that people are able to exchange information that they know is true, that they know is real,” he says.
Four years after the Christchurch attack, there is indeed a sense of urgency surrounding the need for governments to better understand emerging technologies and their powers over politics and society. “Governments understand that this is systemic, it is transformative, and they're not ready,” Bremmer says, adding that “they don't have the expertise, they don't have the resources, and we don't yet have the architecture … we're late!”
Watch our livestream from the UN General Assembly
WATCH LIVE: A deadly terrorist attack in New Zealand was livestreamed in 2019, horrifying the world. The result was an international movement to end extremism and hate online. Join us live today at 11 am ET to learn about the Christchurch Call to Action, how it can create a safer and more secure world, and what global collaboration will look like in the AI era.
CNN's Rahel Solomon will moderate our livestream conversation during the 78th UN General Assembly, with Dame Jacinda Ardern, former prime minister of New Zealand and Special Envoy for the Christchurch Call; Ian Bremmer, president of Eurasia Group and GZERO Media; and Brad Smith, Vice Chair and President, Microsoft.
Hearing the Christchurch Call: Collaboration in the Age of AI
Wednesday, September 20th | 11:00 am - 12:00 pm ET
Participants:
- Dame Jacinda Ardern, Former Prime Minister of New Zealand and Special Envoy for the Christchurch Call
- Brad Smith, Vice Chair and President, Microsoft
- Ian Bremmer, President & Founder, Eurasia Group & GZERO Media
- Rahel Solomon, CNN (moderator)
Report into New Zealand mosque attack focuses on Islamist terror risks, firearms licensing
There were no failings within government agencies that would have alerted them to the imminent attack.
New Zealand mosque shooter arrives in Christchurch for sentencing
SYDNEY (REUTERS) - The suspected white supremacist who killed 51 Muslim worshippers last year, in a massacre that prompted a global campaign to stamp out online hate, arrived in Christchurch on Sunday (Aug 23) ahead of sentencing hearings.
New Zealand mosque shooter sentencing begins on Aug 24
WELLINGTON (REUTERS) - The sentencing hearing for an Australian man accused of killing 51 Muslim worshippers in New Zealand's worst mass shooting has been set to begin on Aug 24, the court said on Friday (July 3).
New Zealand media sets rules for mosque shooting trial
WELLINGTON • New Zealand's major media outlets have vowed to prevent the man charged with the Christchurch mosque shooting from using his trial as a platform for extremist propaganda.