TEC Talks: Misinformation and Disinformation – Section 230: Online Speech and Tech Responsibility

Subscribe to the ThinkND podcast on Apple, Spotify, or Google.

Featured Speakers: 

  • Danielle Citron, Professor of Law, University of Virginia School of Law
  • Yaël Eisenstat, Formerly of the CIA and Facebook
  • Mark McKenna ’97, John P. Murphy Foundation Professor of Law; Founding Director, Notre Dame Technology Ethics Center
  • Elizabeth M. Renieris, Professor of the Practice; Founding Director, Notre Dame-IBM Technology Ethics Lab

Danielle Citron, Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law, opened by discussing an early proposal she made to reform Section 230 by creating a duty of care. She said the idea was very unpopular when she proposed it 12 years ago because everyone believed it would “break the internet.”  Today, liberals call for reform of Section 230 on the grounds that platforms are doing too little filtering, while conservatives argue Section 230 should be dismantled or reformed because platforms are doing too much filtering of what should be free speech.  Citron said the core problem is that we have no comprehensive data privacy and data protection law, so platforms enjoy the complete, unabated pursuit of their business goals.  As a result, platforms are essentially unregulated, a situation Citron called a new wild west.

Yaël Eisenstat, a democracy activist and strategist, said she comes at this problem from a different perspective.  She shifted her career from foreign extremism and terrorism to domestic concerns, which led her to educate herself about Section 230 and related social media phenomena.  She said it is clear from that study that something is very wrong with our information ecosystem.  People understand that social media is contributing to an environment where bad actors have an easier time, from the ease of sex trafficking to the speed at which misinformation spreads.  She also urged people to think of these as shared problems in which we all have a stake, rather than arguing that Section 230 reform should take precedence over other major related issues like data privacy or antitrust.  Citron agreed that all three need to be addressed.  She highlighted how much of the early enforcement, especially by states that might have addressed data collection and antitrust concerns with early big tech practices such as Google's, was curtailed by effective lobbying from those companies, under claims, honest at the time, that data was not being sold (that came later).

Citron gave an overview of Section 230, explaining that the law stemmed from a belief both that information should be free and that platforms would voluntarily take on the job of blocking and filtering offensive speech as good Samaritans in the private sector.  To that end, under-filtering and over-filtering provisions are included in Section 230, such that whichever direction a platform goes with content moderation, it is protected under the law.  Eisenstat added anecdotal commentary based on her observations from her time at Facebook, highlighting how Facebook seemed to have tools to address some of the problematic types of posts, such as obvious disinformation, but often did not employ them.  She argued this is because laws like Section 230 actually disincentivize platforms from taking on a proactive “good Samaritan” role, since doing so would prove that the platforms are technologically capable of moderation and perhaps should be asked to take legal responsibility for the posts.

In terms of how to fix some of the problems Section 230 has created, Citron reminded listeners that much of the original purpose of Section 230 was duty-based.  She suggested bringing that back and conditioning legal immunity “on reasonable content moderation practices in the face of illegality that causes serious harm.”  She urged limiting the language in this specific way because reasonableness is elastic: what is effective and appropriate for one type of problematic activity differs from another, so flexibility in practice is necessary.  Citron also called for stronger transparency in content moderation policies, with improved mechanisms for process and appeals.  Without such legal incentives, Citron argued, platforms will never have a good reason to take content down, as the business model relies on maximizing engagement and the data collection that comes along with it.

Eisenstat shared an observation that many of the Twitter accounts that have since been de-platformed were retweeted by users who attended the January 6th attack at the Capitol.  She suggested these users are likely vulnerable to extremist messaging, and she is curious what role the platforms' recommendation engines played in those users' growing interest in such activities.  Eisenstat suggested, therefore, that not all activities conducted by modern platforms should be considered protected under Section 230, recommendation engines being one example.  Curating, recommending, and even creating content is not uncommon on the part of platforms today, so why should those activities be covered by Section 230?  These are not passive activities, Eisenstat argued.  Further, immunity under Section 230 prevents exactly this type of discovery into how people are radicalized and whether platform recommendations play a role in that process.

Visit the event page for more.


  • There is no comprehensive data privacy and protection regulation in the United States, which makes the internet “a new wild west.”
  • Section 230 is grounded in the idea that information should be free, and that platforms would effectively block and filter offensive speech as “good samaritans” in the private sector.
  • However, Section 230 actually disincentivizes platforms from blocking and filtering offensive content, because doing so would demonstrate that they are capable of it and could therefore be held responsible for it.
  • Without legal incentives, the platforms will remain disincentivized to remove offensive content, because that type of content maximizes engagement and therefore data collection.
  • Perhaps not all content should be considered protected by Section 230 – for instance, algorithmically curating and recommending content that may radicalize users vulnerable to extremist messaging.

  • “We have liberals, on the one hand, calling for Section 230 reform on the grounds that the platforms are doing too little filtering, and on the other side of the aisle, conservatives suggesting that Section 230 needs to be reformed or dismantled because the platforms are doing too much filtering, and in particular, filtering their speech.”  (Danielle Citron, 4:25)
  • “[At Facebook,] we didn’t want the public to think that we actually could clean up certain parts of the platform. And so you’ll hear a lot about how it’s too hard or there’s not enough technical solutions, or all of these things. I’ll tell you that some of the things that we proposed in our teams were 100% technically feasible, and [Facebook] still wouldn’t do it.”  (Yaël Eisenstat, 21:42)
  • “Rather than completely reforming Section 230 except for some of the really smart ideas about duty of care and whatnot, why are we not thinking also about what applies and what doesn’t? And what categories apply to Section 230?”  (Yaël Eisenstat, 35:54)
  • “We have a big problem in our legal imagination for our understandings of privacy harms, and often they’re so cramped that we only recognize privacy harms that relate to money damages, so financial harms, so you’re a victim of identity theft and you have charges now on your credit card and there are all sorts of real, tangible, quantifiable harms that relate to economic costs. Courts will say, ‘that’s a privacy harm!’ when, in fact, that’s not what Warren and Brandeis were thinking about at all. We often don’t think about financial harm as tantamount to the emotional harm.”    (Danielle Citron, 46:30)

Technology Ethics Center, University of Notre Dame