Misinformation on Social Media

This guide provides an overview of the problem of misinformation on social media. It includes tools for evaluating information, as well as lesson plans, resources, and activities for instructors to teach students how to evaluate information and spot misinformation.

Addressing Misinformation Online

There are three main strategies for dealing with misinformation online, and on social media specifically:

Prebunking / Inoculation

Rather than trying to correct misinformation or prevent people from posting it in the first place, "prebunking" attempts to inoculate users against misinformation before they encounter it (Goldberg, 2021).

Teachers and platforms can build up an immune response against misinformation in students and users by (Goldberg, 2021):

  1. Introducing an emotional warning about misinformation
  2. Equipping users to spot and refute a claim
  3. Providing a weakened example of misinformation that students can identify and refute

This approach has advantages over countering specific claims after the fact because it is broader and transferable to other claims (Goldberg, 2021). Additionally, prebunking messages can be apolitical because they do not have to take positions on issues about which people may already have strong opinions, which reduces the risk of triggering defensive motivated cognition (Goldberg, 2021; Roozenbeek et al., 2022, p. 2).

Debunking / Fact-Checking

Debunking, or fact-checking, refers to examining information after the fact to determine its truthfulness, then disseminating the truth to counter false narratives. This is the work that fact-checking organizations do to evaluate claims and support or disprove them.

While this work is important for examining specific claims, and fact-checking sites are a valuable resource for research, countering misinformation after it has already been released is difficult and potentially counterproductive for the following reasons (Roozenbeek et al., 2022, p. 1):

  • It is epistemologically difficult to establish what counts as a fact, especially in politics.
  • Fact-checks are unlikely to reach everyone who was exposed to the initial misinformation.
  • Getting people to believe fact-checks is challenging.
  • Fact-checking individual instances of misinformation is hard to scale.
  • It is difficult to test the effectiveness of fact-checking in the real world.
  • The “continued influence effect” means that people tend to remember their first impression of misinformation even after learning it was false.


Censorship

There are many definitions of censorship depending on context, but broadly, censorship refers to “the suppression of words, images, or ideas that are ‘offensive,’” and it may be carried out by the government or by private groups or individuals (American Civil Liberties Union, n.d.).

Government Restrictions of Misinformation

Censorship by the United States government violates the First Amendment, except in limited circumstances such as fraud, defamation, campaign speech, and false speech by the “broadcast media”; however, these restrictions vary based on state and federal statutes and court decisions (Brannon, 2022).

For example, in cases of defamation, misinformation is constitutionally protected speech unless a plaintiff can prove an allegedly defamatory statement “was made with ‘actual malice’—that is, with knowledge that it was false or with reckless disregard of whether it was false or not” (Brannon, 2022).

Social Media Restrictions of Misinformation

Social media and other internet-based platforms are generally not liable for user-generated content under Section 230 of the Communications Decency Act, which means they do not have to ensure content is not “false, hurtful, or even dangerous” (Oremus, 2022).

Because the legal limits on speech are narrow, social media platforms must decide how to moderate speech and suppress misinformation (or not) (Oremus, 2022).

This has led to inconsistent standards for flagging or removing content and banning accounts that disseminate misinformation, which has in turn led some states to pass legislation restricting how social media companies can moderate speech on their platforms (Oremus, 2022).