There are three main strategies for dealing with misinformation online, and on social media specifically: prebunking, debunking, and censorship (or content moderation).
Rather than trying to correct misinformation or prevent people from posting it in the first place, "prebunking" attempts to inoculate users against misinformation before they encounter it (Goldberg, 2021).
Teachers and platforms can build up this immune response in students and users by forewarning them that they may encounter misleading content and preemptively exposing them to weakened examples of common manipulation techniques (Goldberg, 2021).
This approach has advantages over countering specific claims after the fact because it is broader and transferable to other claims (Goldberg, 2021). Additionally, prebunking messages can be apolitical: because they do not have to take positions on issues about which people may already hold strong opinions, they reduce the risk of triggering defensive motivated cognition (Goldberg, 2021; Roozenbeek et al., 2022, p. 2).
Debunking, or fact-checking, refers to examining information after the fact to determine its truthfulness and then disseminating the truth to counter false narratives. This is the work that dedicated fact-checking organizations do to evaluate claims and support or disprove them.
While this work is important for examining specific claims, and fact-checking sites are a valuable resource for research, countering misinformation after it has already spread is difficult and potentially counterproductive: corrections rarely reach everyone who saw the original falsehood, and misinformation can continue to shape people's beliefs even after it has been corrected, a phenomenon known as the continued influence effect (Roozenbeek et al., 2022, p. 1).
There are many definitions of censorship depending on context, but broadly, censorship refers to “the suppression of words, images, or ideas that are ‘offensive,’” and it may be carried out by the government or private groups or individuals (American Civil Liberties Union, n.d.).
Censorship by the United States government violates the First Amendment, except in limited circumstances such as fraud, defamation, campaign speech, and false speech by the "broadcast media"; however, these restrictions vary based on state and federal statutes and court decisions (Brannon, 2022).
For example, in cases of defamation, misinformation is constitutionally protected speech unless a plaintiff can prove an allegedly defamatory statement "was made with 'actual malice'—that is, with knowledge that it was false or with reckless disregard of whether it was false or not" (Brannon, 2022).
Social media and other internet-based platforms are generally not liable for the content on their platforms under Section 230 of the Communications Decency Act, which means they do not have to ensure content is not “false, hurtful, or even dangerous” (Oremus, 2022).
Because the legal limits on speech are narrow, social media platforms are left to decide for themselves whether and how to moderate speech and suppress misinformation (Oremus, 2022).
This has produced inconsistent standards for flagging or removing content and for banning accounts that disseminate misinformation, which in turn has prompted some states to pass legislation restricting how social media companies can moderate speech on their platforms (Oremus, 2022).