Misinformation on Social Media

This guide provides an overview of the problem of misinformation on social media. It includes tools for evaluating information, as well as lesson plans, resources, and activities for instructors to teach students how to evaluate information and spot misinformation.

Algorithms on Social Media

Social media algorithms refer to a variety of processes that rank user-provided signals to determine what type of content and advertisements to display to users (Mosseri, 2021). These signals include information about:

  • Posts
    • Likes, comments, shares, metadata (time/location/length), etc.
  • User preferences
    • Likes, comments, shares, metadata, time spent on posts, etc.
  • Other users
    • What do users who have provided similar signals interact with?

Algorithms on various platforms rank this information to attempt to serve users content and advertisements that are more likely to keep the user engaged with the ads or the content on the platform itself.
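As a rough illustration, this kind of engagement-weighted ranking can be sketched in a few lines of Python. The signals, weights, and field names below are invented for illustration only; real platforms learn their ranking models from behavioral data rather than hard-coding weights.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Hypothetical engagement signals attached to a candidate post."""
    post_id: str
    likes: int
    comments: int
    shares: int
    predicted_watch_seconds: float  # the platform's estimate for this user

def engagement_score(post: Post) -> float:
    """Combine signals into a single ranking score (illustrative weights)."""
    return (
        1.0 * post.likes
        + 2.0 * post.comments              # comments often weighted above likes
        + 3.0 * post.shares                # reshares spread content furthest
        + 0.5 * post.predicted_watch_seconds
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the highest-scoring appear first."""
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("a", likes=10, comments=1, shares=0, predicted_watch_seconds=5),
    Post("b", likes=2, comments=4, shares=3, predicted_watch_seconds=30),
])
print([p.post_id for p in feed])  # "b" outranks "a" despite fewer likes
```

Note that post "b" wins even with fewer likes: signals associated with deeper engagement (comments, shares, watch time) can dominate a simple popularity count.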

Mass Personalization

Mass personalization refers to psychological targeting that is (Hermann, 2022, p. 5):

  • Powered by artificial intelligence
  • Designed to appeal to psychological traits
    • Gleaned from data harvested from users
  • Able to account for variability over time based on digital footprints
    • Can infer user moods to further tailor the content to provide to users
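A minimal sketch of the trait-tailored messaging described above: the same product is pitched differently depending on a personality trait the platform has inferred from a user's digital footprint. The trait labels and ad copy here are invented for illustration; real psychological targeting infers traits statistically from large numbers of behavioral signals.

```python
# Hypothetical mapping from an inferred personality trait to ad copy.
AD_VARIANTS = {
    "extroverted": "Bring your friends! Everyone's talking about it.",
    "introverted": "Enjoy some quiet time, just for you.",
}

def pick_ad(inferred_trait: str) -> str:
    """Select ad copy matching the user's inferred trait,
    falling back to a generic message when no variant matches."""
    return AD_VARIANTS.get(inferred_trait, "Check out our new product.")

print(pick_ad("introverted"))  # prints "Enjoy some quiet time, just for you."
```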

Implications of Social Media Algorithms

Power of Social Media Algorithms

Social media platforms are incredibly proficient at holding users' attention with hyper-personalized content.

A single “like” can allow actors to infer psychological traits and send targeted messages that significantly affect user behavior (Matz et al., 2017, p. 12717).

Tech companies enable advertisers to send microtargeted messages based on algorithmically determined personality profiles (Kozyreva et al., 2020, p. 118).

Algorithms Contribute to the Spread of Misinformation and Polarizing Content

YouTube recommendations have been shown to lead people to more extremist content (Kozyreva et al., 2020, p. 115). 

In 2018, in an effort to address declining engagement on the platform, Facebook reconfigured its recommendation algorithms, which led outrageous and sensationalized content to go viral at higher rates (Hagey & Horwitz, 2021). Internal documents stated, "Misinformation, toxicity, and violent content are inordinately prevalent among reshares" (Hagey & Horwitz, 2021).

Researchers have shown that viewing 20 widely-shared videos sowing doubt about election systems can retrain TikTok’s algorithm so that it pushes more “election disinformation, polarizing content, far-right extremism, QAnon conspiracy theories and false Covid-19 narratives” even when the user searches with neutral terms (Hsu, 2022).

Filter Bubbles & Echo Chambers

The terms "filter bubbles" and "echo chambers" are often used interchangeably, but they refer to distinct concepts (Kozyreva et al., 2020, p. 118):

  • Filter bubbles: content selected by algorithms according to ranked user-provided signals
  • Echo chambers: information environments where users are only exposed to information from like-minded sources

While algorithms do not necessarily create these situations on social media, they do reinforce user choices, steering users toward more extreme, polarizing content that is more likely to be misinformation (Hameleers et al., 2022; Kozyreva et al., 2020).
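This reinforcement dynamic can be illustrated with a toy simulation: each time the user engages with a topic, the recommender weights that topic more heavily, so the topic is shown more often, which produces more engagement, and the feedback loop narrows the feed. The topic names, update rule, and numbers below are purely hypothetical.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TOPICS = ["news", "sports", "politics", "conspiracy"]

def recommend(weights: dict[str, float]) -> str:
    """Sample a topic in proportion to its current weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

# Start from a uniform interest profile; the simulated user engages
# only with one topic (a stand-in for any user preference).
weights = {t: 1.0 for t in TOPICS}
for _ in range(200):
    shown = recommend(weights)
    if shown == "conspiracy":           # hypothetical user preference
        weights[shown] += 0.5           # reinforce what the user engaged with

share = weights["conspiracy"] / sum(weights.values())
print(f"Share of profile devoted to one topic: {share:.0%}")
```

After a few hundred rounds, the preferred topic dominates the profile even though it began as one choice among four: the algorithm did not create the preference, but it amplified it.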

Although some studies show that algorithmic recommendations push only a minority of users toward hyper-extremist content, the sheer number of people who use these platforms means that millions of people are still led toward extremist content through algorithmic recommendations (Brown et al., 2022).