Perspectives

Q&A: Section 230 Is at the Supreme Court. Here’s Why That Matters for Free Expression

Four legal experts weigh in on two cases at the United States Supreme Court that could alter how the internet functions, how it is governed, and how users engage with it.


To date, the law commonly known as Section 230 of the US Communications Decency Act (CDA) is the strongest protection globally for free speech online. It shields websites, social media platforms, and other services that host content from legal liability for most material created by users, with exceptions under federal criminal law, intellectual property law, laws to combat sex trafficking, and laws protecting the privacy of electronic communications. Such broad protections against liability for user-generated content have allowed online expression to flourish by dissuading companies from censoring speech for which they could otherwise be sued. Section 230 also protects companies’ ability to set their own community standards—for instance, by banning nudity or racist speech—and moderate content accordingly.

In recent years, Section 230 has come under increasing fire due to concerns about user safety, disinformation, harassment, and content moderation. In the interview below, Freedom House’s Allie Funk speaks with four legal experts about the Supreme Court’s upcoming decisions in Gonzalez v. Google and Twitter Inc. v. Taamneh. The cases, which were brought by family members of individuals killed in terrorist attacks, focus on liability for ISIS content hosted on YouTube and Twitter, respectively. Gonzalez v. Google specifically questions whether protections under Section 230 should be limited when platforms’ algorithms recommend content, while Twitter Inc. v. Taamneh addresses whether platforms aid and abet terrorism in violation of the Anti-Terrorism Act if such content is not removed aggressively enough. Decisions are expected to be handed down this summer.

Allie Funk: What were your impressions of February’s oral arguments in Gonzalez v. Google, Twitter Inc. v. Taamneh, or both? Were you surprised by anything the justices said?

Daphne Keller (Director of the Program on Platform Regulation at Stanford's Cyber Policy Center): In both cases, I was impressed by how well-informed, thoughtful, and apolitical the justices’ questions were. I think the avalanche of amicus briefs probably contributed to that, giving the justices a sense of how momentous a decision changing the rules for platforms and online speech could be, and the importance of caution in an area where seemingly small legal changes can have large unintended consequences. I did wish the advocates on all sides had been more specialized in this area of law. Plaintiffs’ counsel, in particular, dropped the ball on some questions that I think Mary Anne and others in the room could have answered well. Getting those arguments out more clearly is important, whether or not I agree with them all.

Anupam Chander (Scott K. Ginsburg Professor of Law and Technology at Georgetown University Law Center): As Daphne says, the justices asked difficult questions of both sides in Gonzalez and demonstrated that they were grappling with the difficult issues the case presents. Justice Elena Kagan had the best line of the argument: “These are not, like, the nine greatest experts on the internet.” Justice Clarence Thomas proved, to me, the biggest surprise. Despite having called for a narrowing of the interpretation of Section 230, he led off the questioning with a challenge to plaintiff’s attorney, one to which the attorney did not adequately respond. He asked: if the algorithm was in fact neutral between cooking videos and ISIS videos, should its maker be liable for teeing up the latter category of videos? Plaintiff’s attorney sought to turn from neutral algorithms to the actions of YouTube in producing thumbnails, which he believed took the company out of the cover of Section 230. Justice Thomas may see the possibility of liability here as depending on whether the algorithm was designed to promote terrorism.

Mary Anne Franks (Professor of Law and Michael R. Klein Distinguished Scholar Chair at University of Miami School of Law, and President and Legislative & Tech Policy Director at the Cyber Civil Rights Initiative): Like others, I was generally impressed with the sophistication of the justices’ questioning in both cases. I thought Justice Ketanji Brown Jackson’s questions in Gonzalez were particularly astute, as they demonstrated a deep understanding of how the text and legislative history make clear that Section 230 as a whole is intended to be a Good Samaritan law for the internet—that is, its primary goal is to incentivize voluntary, good faith interventions against harm. The way that courts have overwhelmingly interpreted Section 230, by contrast, erases the incentive to prevent harm and provides an incentive to cause harm—it allows tech companies to act as recklessly as they want in designing their products and services, because more harmful, provocative content equals more profit. This kind of unqualified immunity shuts the courtroom door on individuals who have experienced grievous injury, displaces massive amounts of settled law, and prevents the public from learning the details of what tech companies are doing to accelerate and profit from harm.

Artur Pericles Lima Monteiro (Associate Research Scholar and Wikimedia Fellow at Yale Law School): I also had a positive impression of the oral arguments. The justices seemed to have nuanced concerns—even those who had previously been vocal against platforms or were sympathetic to the plaintiffs. It sounded as if they appreciated that there is no single tweak that will do the trick.

Allie: How would internet users in the US be impacted if the Supreme Court rolled back protections under Section 230? Any thoughts on the effect globally if it were changed?

Daphne: The bottom line is that without Section 230 protection, platforms would have strong incentives to do one of two things. First, they could stop trying to moderate content altogether to ensure they won’t be treated as editors or publishers of user speech. That would make every platform look something like 4chan: full of hate speech, scams, and other garbage. Second, platforms could moderate very aggressively, taking down anything with even a whiff of legal risk to protect themselves. In that world, we wouldn’t see things that threaten powerful, well-lawyered people—like #MeToo accusations, critical journalism, or whistleblower claims—on major platforms.

Anupam: Take the facts of Gonzalez and Taamneh, which both arise from the horrors experienced by the victims of terrorism. In both cases, companies are accused of abetting terrorism by allowing the circulation of material on their sites. But, of course, these are American companies, run by people who abhor terrorism, such as the terror inflicted on the plaintiffs in these cases. The companies have policies that seek to banish content promoting terrorism, though it continues to slip through their filters. Now, imagine if they became liable for such content upon receiving a notice claiming that some piece of content promotes terrorism. What if they then look at the content and see that it is making accusations against US military activities in the Middle East—which might cause someone to respond, perhaps violently. Their self-interest will lead them to remove such content, lest they be held liable for knowingly promoting terrorism—now, knowingly, because they received a notice. And this scenario will apply not just to speech that could conceivably lead to terrorism, but also to all kinds of speech that could potentially lead to harm, like challenging officially approved medical practices or alleging corruption, sexual or other physical assault, or racism. The speech of those with the least means to defend themselves online (and the least access to conventional media) would be most at risk.

Looking internationally, Section 230 was part of the inspiration for what would become the European Union’s E-Commerce Directive, though that directive was not as protective as Section 230. I trace its global impact in a paper, “Section 230 and the International Law of Facebook,” in the Yale Journal of Law & Technology.

Art: Anupam's paper is the best reference here. I would highlight that, as he explores in that paper, the US promoted Section 230 abroad, including by embedding it in its trade agreements with Japan and with Canada and Mexico. This doesn't mean that it served as some sort of model law everywhere. Brazil, for instance, was once quite proud of the balance struck by its Marco Civil da Internet, which doesn't immunize providers' own decisions to take down content, but grants them a safe harbor for user-generated content: providers face no liability for that content unless they fail to comply with a court order specifying what should be restricted.

Section 230 has more recently become a target for those calling for platform responsibility, to whom the provision stands as an obsolete, laissez-faire approach that lets companies off the hook while society pays the price. Intermediary liability immunity models elsewhere could fall rapidly if Section 230 is weakened, or even if the Court's opinion voices disapproval while maintaining the status quo. In Brazil, the mere fact that the US Supreme Court had agreed to consider these two cases was enough for members of the Brazilian Supreme Court—which has recently held public hearings on a constitutional challenge to Marco Civil—to suggest that "even the United States" deemed that intermediary liability immunity should be cut down.

Mary Anne: A threshold question here is who and what is being protected by the Section 230 status quo. Tech companies are certainly reaping the benefits, as are individuals who enjoy the freedom to harass, threaten, and organize violence against groups they do not like with no consequences. This status quo isn’t particularly protective of those targeted by intimidation and exploitation, who are overwhelmingly women, racial minorities, and sexual minorities. So while it is likely that the most privileged beneficiaries of the current system might find themselves in a somewhat less advantageous position if the system were changed, Section 230 reform done correctly has the potential to allow other individuals to escape from life-destroying abuse that currently silences their speech, drives them out of their careers, and inhibits their participation in society.

Allie: President Biden and many in Congress have called for or proposed legislation reforming the law. Do you think the law should be reformed? If so, what changes would you make?

Daphne: I think there are a lot of obvious knobs and dials that lawmakers can use to change laws like Section 230, and there are a lot of potentially useful lessons from models like the new Digital Services Act (DSA) in the European Union (EU). But Congress doesn’t seem interested in applying those lessons. Instead, we keep getting reckless bills like the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act—which tackles the immensely important problem of online child abuse material, but does so in ways that fundamentally undermine privacy and security online for everyone. 

Art: I think the support Section 230 once had is at a breaking point, as even those who back it will admit. Other intermediary immunity regimes globally are at similar junctures. In Brazil, that shift has been rapid: globally celebrated when it was enacted in 2014, Marco Civil da Internet has been under considerable pressure for reform since 2020. My current research agenda is centered on how Section 230 and regulation generally could be structured to empower users and communities.

Anupam: I think most of the proposals that I have seen would likely harm the ability of marginalized groups to speak, especially when they make claims of unjust treatment. I think Section 230 should be revised to make clear that it doesn’t protect against injunctions.

Mary Anne: If the Supreme Court fails to scale back the excessively broad interpretation of Section 230 that has taken hold in the courts, Congress should take up the responsibility of amending Section 230 to clarify its purpose and foreclose interpretations that render the statute incoherent. At a minimum, this requires two specific changes: amending the statute to make clear that interactive computer service providers that demonstrate deliberate indifference to harmful content are ineligible for immunity; and making clear that the law’s protections apply only to speech. To accomplish the first change, Section 230(c)(1) should be amended to state that providers or users of interactive computer services cannot be treated as the publisher or speaker of speech wholly provided by another information content provider, unless such provider or user intentionally encourages, solicits, or generates revenue from the speech, or exhibits deliberate indifference to harm caused by that speech. To accomplish the second change, the word “information” in Section 230(c)(1) should be replaced with the word “speech,” to put all parties in a Section 230 case on notice that the classification of the content at issue as protected speech cannot be assumed, but instead must be demonstrated.

Allie: Are there other measures that you would recommend Congress take to address challenges with content moderation or broader concerns such as disinformation and harassment?

Anupam: The most important immediate step would be to pass the American Data Privacy and Protection Act, which would limit the gathering and use of personal data. Congress should review and help fund rigorous studies of the impact of social media on young people. It should then consider a targeted legislative response based on these studies. Congress should also watch the implementation of the EU’s DSA and see if it results in improvements that the US could borrow.

Daphne: We won’t make smart laws unless we understand better what platforms are actually doing, and how they shape online discourse. The way to do that is through better transparency measures. There’s tricky First Amendment territory here—I have real worries that some existing transparency legislation will actually allow state attorneys general like Ken Paxton in Texas to strong-arm platforms and change their policies for user speech. That’s terrifying if you care about things like LGBT+ rights or immigrants’ rights. But there’s also really low-hanging fruit, like federal legislation protecting the rights of researchers, such as those at New York University’s Cybersecurity for Democracy Project, to “scrape” data from platforms, and protecting them from platform lawsuits under archaic laws like the Computer Fraud and Abuse Act.

Mary Anne: Congress should enact narrowly targeted federal criminal legislation to address new and highly destructive forms of technology-facilitated abuse, especially those disproportionately targeted at vulnerable groups, including nonconsensual pornography, sexual extortion, doxing, and digital forgeries (“deep fakes”). As Section 230 immunity does not apply to violations of federal criminal law, the creation of these laws (such as the Stopping Harmful Image Exploitation and Limiting Distribution [SHIELD] Act) will ensure that victims of these abuses will have a path to justice with or without Section 230 reform. In addition, Congress should make deliberate efforts to reduce public reliance on corporate-owned social media, including passing robust consumer protection laws and regulations as well as investing in public universities, traditional media, libraries, and other non–social media avenues for the public to obtain valuable information and to exchange ideas.

Art: Approaches such as that of the EU’s DSA seem to be gaining ground—Brazil's latest version of the "fake news bill" (now much broader in scope than only aiming to tackle mis- or disinformation) finds inspiration in it, for instance. I am concerned that this kind of regulation is too reliant on platforms as the architects of a new edifice for the digital public sphere. I wish new regulation would make platform governance more participatory, not less.

Note: This interview has been lightly edited for clarity and concision. Views expressed by the interviewees may not reflect Freedom House's official position.