Section 230 won’t be a savior for Generative AI
In the US, Section 230 of the Communications Decency Act has been called the law that “created the internet.” It provides legal liability protections to internet companies that host third-party speech, such as social media platforms that rely on user-generated content or news websites with comment sections. Essentially, it prevents companies like Meta or X from being on the hook when their users defame one another, or commit certain other civil wrongs, on their site.
In recent years, 230 has become a lightning rod for critics on both sides of the political aisle seeking to punish Big Tech for perceived bad behavior.
But Section 230 likely does not apply to generative AI services like ChatGPT or Claude. While this is still untested in the US courts, many legal experts believe that the output of such chatbots is first-party speech, meaning someone could reasonably sue a company like OpenAI or Anthropic over output, especially if it plays fast and loose with the truth.
Supreme Court Justice Neil Gorsuch suggested during oral arguments last year that AI chatbots would not be protected by Section 230. “Artificial intelligence generates poetry,” Gorsuch said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”
Without those protections, University of North Carolina professor Matt Perault noted in an essay in Lawfare, the companies behind LLMs are in a “compliance minefield.” They might be forced to dramatically narrow the scope and scale of how their products work if any “company that deploys [a large language model] can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk.”
We’ve already seen similar forces at play in the court of public opinion. Facing criticism over political misinformation, racist images, and deepfakes of politicians, many generative AI companies have limited what their programs are willing to generate – in some cases, banning political or controversial content entirely.
Lawyer Jess Miers of the industry trade group Chamber of Progress, however, argues in Techdirt that 230 should protect generative AI. She says that because the output depends “entirely upon whatever query or instructions its users may provide, malicious or otherwise,” the users should be the ones left holding the legal bag. But proving that in court would be an uphill battle, she concedes, in part because defendants would have the onerous task of explaining to judges how these technologies actually work.
The picture gets even more complex: Courts will also have to decide whether only the creators of LLMs receive Section 230 protections, or whether companies using the tech on their own platforms are also covered, as Washington Post writer Will Oremus pondered on X last week.
In other words, is Meta liable if users post legally problematic AI-generated content on Facebook? Or what about a platform like X, which incorporates the AI tool Grok for its premium users?
Mark Lemley, a Stanford Law School professor, told GZERO that the liability holder depends on the law but that, generally speaking, the liability falls to whoever deploys the technology. “They may in turn have a claim against the company that designed [or] trained the model,” he said, “but a lot will depend on what, if anything, the deploying company does to fine-tune the model after they get it.”
These are all important questions for the courts to decide, but the liability issue for generative AI won’t end with Section 230. The next battle, of course, is copyright law. Even if tech firms are afforded some protections over what their models generate, Section 230 won’t protect them if courts find that generative AI companies are illegally using copyrighted works.
GOP battle with Big Tech reaches the Supreme Court
Jon Lieber, head of Eurasia Group's coverage of political and policy developments in Washington, discusses Republican states picking fights with social media companies.
Why are all these Republican states picking fights with social media companies?
The Supreme Court this week ruled that a Texas law banning content moderation by social media companies should not go into effect while the lower courts debate its merits, blocking the latest effort by Republican-led states to push back on the power of Big Tech. Florida and Texas are two of the large states that have recently passed laws preventing large social media companies from censoring or de-platforming accounts the platforms consider controversial, moderation the companies say is essential for keeping their users safe from abuse and misinformation. The lower courts disagreed on the constitutionality of these laws. One circuit court found that the Florida law probably infringes on the free speech rights of the tech companies (yes, companies do have free speech rights under the US Constitution), while a different circuit court said that the state of Texas did have the ability to dictate how these firms moderate their platforms.
These questions will likely be settled eventually by the Supreme Court, which will be asked to weigh in on the constitutionality of these laws and whether they conflict with the provision of federal law, known as Section 230, that protects platforms from liability for content moderation. But the issue is also likely to escalate once Republicans take control of the House of Representatives next year. These anti-Big Tech laws are part of a broader conservative pushback against American companies that Republicans think have become too left-leaning and far too involved in the political culture wars, most frequently on the side of liberal causes.
And states are taking the lead because of congressional inertia. Democrats are looking at ways to break up the concentrated power of these companies, but lack a path towards a majority for any of the proposals that they've put forward so far this year. Social media, in particular, is in the spotlight because Twitter and Facebook continue to ban the account of former president Donald Trump. And because right-leaning celebrities keep getting de-platformed for what the platforms consider COVID disinformation and lies about the 2020 election.
But recent trends strongly suggest that when Republicans are in charge, they're likely to push federal legislation that will directly challenge the platforms' ability to control what Americans see in their social media feeds, a sign that the tech wars have just begun.
Kara Swisher on Trump’s social media ban
What does renowned tech journalist Kara Swisher make of the swift and near-universal social media ban imposed on former President Trump shortly after the January 6 Capitol riots? She supported the move, but she doesn't think these companies should be let off the hook either. "Why are these systems built this way so someone like President Trump can abuse them in such a fashion. Or in fact, not abuse them but use them exactly as they were built." Her conversation with Ian Bremmer is part of the latest episode of GZERO World.
Section 230: The '90s law still governing the internet
The technology of the 1990s looked nothing like today's connected world, and the internet hosted just a fraction of the billions of people who now use it every day. Yet Section 230 of the Communications Decency Act, passed in 1996, is the law that governs the rights and responsibilities of social media companies that weren't even around when it was written. Ian Bremmer explains on GZERO World.
How to change a social media business model that profits from division
The United States has never been more divided, and it's safe to say that social media's role in our national discourse is a big part of the problem. But renowned tech journalist Kara Swisher doesn't see any easy fix. "I don't know how you fix the architecture of a building that is just purposely dangerous for everybody." Swisher joins Ian Bremmer to talk about how some of the richest companies on Earth, whose business models benefit from discord and division, can be compelled to see their better angels. Their conversation was part of the latest episode of GZERO World.
Kara Swisher on Big Tech’s big problem
Renowned tech journalist Kara Swisher has no doubt that social media companies bear responsibility for the January 6th pro-Trump riots at the Capitol and will likely be complicit in the civil unrest that may continue well into Biden's presidency. It's no surprise, she argues, that the online rage that platforms like Facebook and Twitter intentionally foment translated into real-life violence. But if Silicon Valley's current role in our national discourse is untenable, how can the US government rein it in? That, it turns out, is a bit more complicated. Swisher joins Ian Bremmer on GZERO World.
Podcast: Kara Swisher on Big Tech's Big Problem
Listen: Renowned tech journalist Kara Swisher has no qualms about saying that social media companies bear responsibility for the January 6th pro-Trump riots at the Capitol and will likely be complicit in the civil unrest that may continue well into Biden's presidency. It's no surprise, she argues, that the online rage that platforms like Facebook and Twitter intentionally foment translated into real-life violence. But if Silicon Valley's current role in our national discourse is untenable, how can the US government rein it in? That, it turns out, is a bit more complicated. Swisher joins Ian Bremmer on our podcast.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
How can the Biden administration rein in Big Tech?
Renowned tech journalist Kara Swisher has no qualms about saying that many of the country's social media companies need to be held accountable for their negative role in our current national discourse. Swisher calls for "a less friendly relationship with tech" by the Biden administration, an "internet bill of rights" around privacy, and an investigation into antitrust issues.
Swisher, who hosts the New York Times podcast Sway, joins Ian Bremmer for the latest episode of GZERO World, airing on public television nationwide beginning this Friday, January 22nd. Check local listings.