A Brief Inquiry Into the Care and Feeding of Online Responders

December 4, 2023

What happens to people when they respond to misinformation on social media?

Recently, a group of us from ARTT, along with colleagues from the University of Washington and the University of Michigan, published a study describing what happens, and what people need in order to keep engaging with other users on difficult and emotionally charged topics.

In our study, we started from the proposition that engagement in online conversations can be more effective when it comes from individual users rather than from platforms. We set out to answer two questions:

  1. What barriers or challenges do people face in identifying and responding to misinformation posted on social media?

  2. What design principles should inform tools or resources (such as the ARTT Guide!) that support people in identifying and responding to misinformation posted on social media?

To find out, our team conducted semi-structured interviews with 29 people, including misinformation research experts, online community moderators, and people who respond to misinformation on social media as part of their work.

Here’s what we learned:

Responding to misinformation takes time and effort. Bottom-up, user-led responses require a substantial amount of time to gather facts, and real effort to craft an effective reply.

“It’s all about finding credible sources, credible analysis, but also verifying using different sources,” said one interviewee.

Responding in an environment filled with conflict and relational strife carries a significant emotional cost. It can be hard to engage in the misinformation space day after day, and users sometimes feel they don’t have enough support to continue. The emotional labor involved in responding can be a major hurdle.

As one user reported, “On the public-facing Twitter account, I have to ignore a fair amount of it because if I tried to engage everything like that that was out there, it would be an intolerable burden.”

So what does that tell us?

To lessen the burden of engaging in difficult online conversations, the people we studied said they needed tools that both minimize the time and effort of identifying misinformation and provide tailored suggestions for effective, individualized responses, such as tips and guidance on how to craft a reply based on one’s relationship with the person who shared the misinformation.

This research reflects one of the animating principles of the ARTT project. Study participants told us they need better tools and more support for engaging in productive conversations, online and off.

At the same time, as we build our generative AI product, we are mindful of what users here and in other venues have told us: while they would like a product that provides assistance, they are leery of a tool in which the AI does all the work.

This suggests to us that the next version of the ARTT Guide needs to work much like spellcheck: a tool that makes suggestions and offers guidance but leaves the final product up to the user.

Take our survey and share your thoughts!

For more findings and insights, read our full study, which appears in the Harvard Kennedy School Misinformation Review: User experiences and needs when responding to misinformation on social media.

We also still need more thoughts on this. If you have some, please consider taking our survey, or simply email us with ideas and questions!

Although those of us new to the project bring a wide range of experience, we all have one thing in common: we want to better understand ARTT users like you, help you have better trust-building conversations, and help you answer the question: “What can I say, and how do I say it?”

This work is especially important as we continue through this phase of the ARTT project and focus on finding solutions for our core audience. Is there anything you think we should consider?

If you enjoyed this article, subscribe to our newsletter!