Exploring the Concept of ‘Misinformation as a Harm’

March 28, 2024

Thinking about the potential harmfulness of misinformation and rumor, we’re reminded of the ways that we’re bound together.

As part of the ARTT project, we think about how different people analyze information, and misinformation, on the internet. In particular, fact-checkers regularly come to mind, since having accurate information is a critical part of helping to inform others, whether by co-verifying information, offering corrections, or encouraging healthy inquiry. Some of our team members also know that fact-checkers can be overwhelmed by the sheer volume of information there is to potentially fact-check (consider the more than 13,000 fact-check articles that these four organizations alone published between May 2019 and August 2022).

So, Amy Zhang and I, along with some others associated with her Social Futures Lab, decided to learn more about how fact-checkers prioritize the claims and issues they focus on. What we tested, and learned more about, was the concept of “misinformation as a harm.”

Using the FABLE Framework to triage potential harmfulness

The question of whether false information is harmful is a very old one – consider the supposedly harmless “white lie.” While people have debated this question for millennia, we do know that what counts as harmful can be very context-dependent. And whether a piece of information has the potential to harm someone must, at least in a democratic society, be balanced against other rights, such as freedom of speech, conscience, and assembly.

As it turns out, giving people access to correct information when there is potential for harm is an especially strong motivator for fact-checkers. And because sorting through the kinds of claims to fact-check is a time-consuming process, we developed a model that might help them triage claims according to potential harmfulness.

To get a little into the nitty-gritty: our framework tries to clarify the dimensions, or categories, along which a false rumor or piece of information becomes increasingly harmful, rather than identifying types of harm (like physical versus social harms).

The FABLE framework has five dimensions:

  1. (Social) Fragmentation: The tendency towards social fragmentation within the content’s narrative.
  2. Actionability: The potential for action resulting from the content.
  3. Likelihood of Spread: The likelihood of the content’s spread and exposure.
  4. Exploitativeness: The degree to which the content exploits its intended audience.
  5. Believability: The believability of the content’s information to the audience.

Dimensions of the FABLE Framework of Misinformation Harms
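
To make the triage idea concrete, here is a minimal sketch in Python of how the five dimensions might feed a prioritization queue. The 1–5 rating scale, the unweighted sum, and the example claims are all assumptions made for illustration; the framework names the dimensions, but this particular scoring formula is our own.

```python
from dataclasses import dataclass

@dataclass
class FableScore:
    """Hypothetical 1-5 reviewer ratings for each FABLE dimension."""
    fragmentation: int     # tendency toward social fragmentation
    actionability: int     # potential for action resulting from the content
    likelihood: int        # likelihood of spread and exposure
    exploitativeness: int  # degree to which the content exploits its audience
    believability: int     # believability of the content to its audience

    def total(self) -> int:
        # Unweighted sum; a real triage process might weight
        # dimensions differently or use reviewer-specific rubrics.
        return (self.fragmentation + self.actionability + self.likelihood
                + self.exploitativeness + self.believability)

def triage(claims: dict[str, FableScore]) -> list[tuple[str, int]]:
    """Rank claims from highest to lowest estimated harm potential."""
    return sorted(((claim, score.total()) for claim, score in claims.items()),
                  key=lambda pair: pair[1], reverse=True)

# Example: two hypothetical claims rated by a reviewer.
queue = {
    "Claim A: fake cure for a serious illness": FableScore(2, 5, 4, 5, 4),
    "Claim B: misattributed celebrity quote": FableScore(1, 1, 3, 1, 3),
}
for claim, total in triage(queue):
    print(f"{total:2d}  {claim}")
```

A real workflow might also adjust the weighting by context, for example raising the weight on Actionability during a public-health emergency.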

You can find more about the work to develop and refine a misinformation-harms framework in this paper: Misinformation as a harm: structured approaches for fact-checking prioritization (arXiv).

We hope that others will continue to build on this framework and research, and develop better tools, even automated ones, that can assist fact-checkers as they help us navigate the tidal waves of information out there.

Another key takeaway: We are connected, and we all depend on each other.

There are more general takeaways from this work as well. The first is how much we are connected to, and depend on, one another. We learned a lot about the concerns that fact-checkers have for their readers. We could also see our connections to one another in the implications of, say, a piece of health misinformation about an issue that is difficult to understand; such complex rumors fall under the Exploitativeness dimension, with their potential to harm individuals, communities, and societies.

And more tools and resources (like ones that help me reflect on my own emotional responses; check out the Believability-related questions) can also enable us to make independent and informed choices about our own lives.

At the same time, another powerful lesson is how much each of us is capable of. Individually, we can wield considerable power, such as in how we affect others, and just because something has the potential to harm us doesn’t mean that it always or inevitably will.

Help the ARTT team reach more people!

If you like this newsletter, help us reach more people by sharing it with a colleague or a friend who might be interested in discussing how to create opportunities for trusted conversations online. You can also share this link to subscribe to our newsletter.