“Urgent: A Structured Response to Misinformation as Harm”
Working Paper, September 2022
Online misinformation is a major challenge for societies today. Belief in false claims about science, such as vaccine misinformation, can lead people to engage in behavior that risks their own health. Misinformed beliefs can also defeat public health measures that rely on collective compliance to protect society’s most vulnerable. Similarly, belief in inaccurate or misleading narratives about topics such as vote-rigging or other supposed election interference can lower public trust in democratic institutions, reduce participation in political activities such as voting, interfere with the peaceful transition of power, and even motivate political violence.
Fact-checking is a critical activity in addressing misinformation. It supports individual readers who seek good information, and it also supports content-moderation initiatives on large-scale platforms. However, fact-checking is laborious: the process involves investigating claims, collecting convincing evidence that those claims are false or misleading, and then sharing that evidence. With torrential volumes of user-generated content created daily, it is impossible to fact-check every new article, post, message, or claim.
As a result, fact-checkers tasked with addressing online misinformation must prioritize what they tackle every day. Given that prioritization is unavoidable, how should fact-checking efforts decide which content to address first? A working group of academics, non-governmental-organization researchers, and students based in the Social Futures Lab at the University of Washington’s Allen School of Computer Science and Engineering decided to explore this question.
From interviews that the authors of this paper conducted with fact-checkers, we found that fact-checking is still a young field without standardized processes. Fact-checkers typically take a relatively ad hoc approach to prioritization, relying on individual judgment and case-by-case discussion with colleagues. Could prioritization instead be achieved in a principled and systematic way? One way forward that we propose is harm assessment.
In applying a structured harm assessment to misinformation, we begin by observing that while all misinformation is harmful to some degree, not all misinformation is equally harmful. Following a literature review and a series of interviews and workshops with fact-checkers and other misinformation experts, we identified major dimensions for assessment.
Five dimensions can help determine the potential urgency of a specific message or post when considering misinformation as harm: actionability, exploitativeness, likelihood of spread, believability, and social fragmentation. We conclude this paper with a checklist of questions to help determine a piece of content’s relative level of urgency within each dimension.
The dimensions and the questionnaire are intended as both conceptual and practical tools to support fact-checkers, content moderators, peer-correction efforts, and other initiatives as they prioritize their responses to spreading misinformation.
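To make the assessment concrete, the checklist answers for each dimension can be combined into a single relative-urgency value. The sketch below is our own illustration, not part of the framework itself: the 0–3 scale, equal weighting, and averaging rule are all assumptions, and a practitioner would substitute the scores produced by the questionnaire.

```python
from dataclasses import dataclass, fields

@dataclass
class HarmAssessment:
    """One score per dimension, 0 (low) to 3 (high).

    The scale and equal weighting are illustrative assumptions,
    not prescribed by the framework.
    """
    actionability: int
    exploitativeness: int
    likelihood_of_spread: int
    believability: int
    social_fragmentation: int

    def urgency(self) -> float:
        # Average the five dimension scores into one relative-urgency value.
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)

# Example: a highly believable, fast-spreading claim with a clear call to action.
post = HarmAssessment(actionability=3, exploitativeness=1,
                      likelihood_of_spread=3, believability=2,
                      social_fragmentation=1)
print(post.urgency())  # 2.0
```

An averaged score like this only supports ranking candidate items against each other; the underlying per-dimension answers remain the substantive judgment.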
A detailed version of the questionnaire alone can be found here, and feedback regarding the framework and questionnaire is welcome via the Google form below:
Download the full paper here
View the online Misinformation Harm Questionnaire
Submit feedback
Methodology and Acknowledgments
The development of this framework was informed by existing research in the fields of misinformation, cyber-harms, and hate speech, as well as by semi-structured interviews with professional fact-checkers. It is a joint effort between the ARTT project team and research partners tied to the UW Social Futures Lab.