Frequently Asked Questions
Last Updated: August 21, 2024
What is ARTT?
The Analysis and Response Toolkit for Trust (ARTT) project provides practical tools that support trust-building conversations online.

The project brings together insights from research fields such as computer science, social science, media literacy, conflict resolution, and psychology, as well as from practitioners in communities focused on health-related communication, including journalism, vaccine safety, and Wikipedia.

Our main tool, the ARTT Guide, is a Web-based software assistant that provides a framework of possible responses for everyday conversations around complicated topics. By bringing together insights about how to engage in positive exchanges around credible information, and by providing guidance, encouragement, and inspiration, the ARTT Guide will help our users answer the question: “What can I say and how do I say it?”

For example, we noticed that one potentially contentious exchange about vaccines in a subreddit ended respectfully after the participants listened to one another, empathized with different audiences, and took the perspective of other people. (Wouldn’t it be nice if all conversations went like this?)

We aim to deliver current expert guidance and encouragement through a software assistant so that people can combine ARTT’s insights with their own local expertise and experience as they strive to have better online conversations.
What is the goal of ARTT?
ARTT's purpose is to support public conversations that build trust. In exchanges about inaccurate or difficult-to-understand information, how you might respond depends not only on your understanding of the information but also on the goal of your conversation. The nature of social media platforms can make meaningful discussion online even harder.

ARTT wants to help the people doing this challenging work. Beyond the intellectual difficulty of finding the right information and figuring out how to respond, participating in tricky online conversations is emotionally and psychologically draining. The people motivated to respond in these situations, including amateur volunteers, members of online communities like Wikipedia, health communicators, journalists, and librarians, are deeply fatigued and need help.

ARTT offers practical responses to improve online conversations. What solutions do experts have to offer? Can lessons from academics and experienced practitioners be boiled down for easier use? And in what ways are everyday people already succeeding at trust-building online conversations? Distilling these possibilities for those navigating conversations is what the Analysis and Response Toolkit for Trust is trying to do.

ARTT provides inspiration. These motivated community members, experts and amateurs alike, are modeling the kinds of conversations that are needed in a democracy: open, respectful exchanges about factual information on issues that touch our day-to-day lives. So we also collect and provide examples of these public exchanges to remind ourselves of what is possible and to help imagine what better online exchanges can look like.
What makes ARTT’s approach unique?
Instead of focusing on winning arguments and shutting people down, the goal of the ARTT project is to help people build bridges and engage in productive online conversations. We aim to bring expert guidance and encouragement through a software tool so that people can combine it with their own local expertise and experience as they strive to have better online conversations.
Is ARTT specifically focused on questions around vaccines?
Yes. Although many principles are broadly applicable, the ARTT Guide is focusing first on providing resources for discussions around vaccines.
Who will be using ARTT?
We hope that ARTT will be broadly useful! Most immediately, the ARTT project supports health communicators, educators, and other “superhero” responders who work to keep local, online communities more informed.
What are the components of ARTT?
The focus of the project is the ARTT Guide, a Web-based software tool that offers insights on how to analyze information and respond during online conversations around complicated topics.

The ARTT toolkit includes relevant tips and guidance for different types of response. Our aim is to empower users with options for response, including correction, listening, empathizing, and encouraging healthy inquiry.

The toolkit also includes the ARTT Catalog, a curated library of studies and reports from across research disciplines. The Catalog presents the latest findings from these disciplines on how best to engage, in trust-building ways, in online conversations around misinformation and other contentious topics.

We are also working on other resources, including a curriculum promoting information analysis and conversation response.
What are responses, and why are they important?
Our aim is to empower users with options for response when they are in conversations in which they feel stuck. Research shows there are a number of possibilities to consider, such as listening, empathizing, encouraging healthy inquiry – or perhaps not responding at all.
What sources are you using for ARTT’s responses? What is your methodology based on?
ARTT’s guided responses are sourced from the latest research in psychology, conflict resolution, media literacy, and other fields. While our tool will offer suggestions on how best to correct inaccurate information, it will also give users guidance on other response possibilities, including: co-verify, de-escalate, empathize, encourage healthy inquiry, invite sociability, listen, share, or take perspective.
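For readers who think in code, here is a minimal, hypothetical sketch of how a guide tool could represent these response types and attach guidance to each. The type names, fields, and example entries are illustrative assumptions, not ARTT’s actual data model.

```typescript
// Hypothetical sketch only: ARTT has not published its data model.
// It illustrates how a guide tool *could* represent the response
// options named above and pair each with short guidance text.

type ResponseType =
  | "correct"
  | "co-verify"
  | "de-escalate"
  | "empathize"
  | "encourage healthy inquiry"
  | "invite sociability"
  | "listen"
  | "share"
  | "take perspective";

interface ResponseOption {
  type: ResponseType;        // which strategy this option represents
  guidance: string;          // research-informed tip shown to the user
  examplePhrasing?: string;  // optional starter language the user can adapt
}

// Example entries a tool like the ARTT Guide might surface.
const options: ResponseOption[] = [
  {
    type: "listen",
    guidance: "Ask an open question before offering any correction.",
    examplePhrasing: "Can you tell me more about what worries you here?",
  },
  {
    type: "empathize",
    guidance: "Acknowledge the feeling behind the post before the facts.",
  },
];

// A user browsing the guide might filter options by strategy.
const listening = options.filter((option) => option.type === "listen");
console.log(listening);
```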
Who is building ARTT?
Hacks/Hackers and the Paul G. Allen School of Computer Science & Engineering at the University of Washington are ARTT’s lead organizations. During Phase II, which commenced in October 2022, partner and collaborating organizations include Wikimedia DC, the Social Science Research Council (SSRC), Children’s Hospital of Philadelphia, the National Public Health Information Coalition, and others. Throughout Phase I, a variety of organizations, including Wikimedia DC, MuckRock Foundation, and the Social Science Research Council, collaborated and partnered on the project.

Our team includes researchers, journalists, computer scientists, educators, democracy and conflict resolution specialists, Wikipedians, health science communicators, and others working on information reliability.
Who is on your team?
Connie Moon Sehat (Hacks/Hackers Researcher at Large) is the Principal Investigator (PI) for the ARTT project. Amy X. Zhang (Assistant Professor at University of Washington’s Allen School) and Franziska Roesner (Associate Professor, Allen School), serve as co-Principal Investigators (co-PIs). Kate Starbird (Associate Professor in the UW Department of Human Centered Design & Engineering, and Director of the UW Center for an Informed Public), Tim Althoff (Assistant Professor in the Paul G. Allen School of Computer Science & Engineering), and Tanu Mitra (Assistant Professor, UW Information School) all participate as senior personnel on the project.
How is ARTT funded?
In September 2021, ARTT and its partner organizations received Phase I funding from the National Science Foundation through its Convergence Accelerator. The ARTT project received a $5 million Phase II follow-up award in September 2022.

In 2024, the ARTT project received a grant from the Bipartisan Policy Center to pilot its ARTT-LEO curriculum for election officials in North Carolina.
Does the ARTT Guide include Artificial Intelligence (AI)?
Currently, we are exploring the potential to use AI in ways that help provide users with language and writing options as they craft their responses. These options will not be applied automatically: AI-generated responses must first be selected, verified, and edited by users, who ultimately decide what they want to say.
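As a rough illustration of this human-in-the-loop idea, the sketch below shows one way such a flow could be wired up. The function names and types are hypothetical assumptions for the example, not ARTT’s actual API.

```typescript
// Hypothetical sketch of the human-in-the-loop flow described above;
// names and types are illustrative, not ARTT's actual implementation.

interface Draft {
  text: string;
  source: "ai" | "user";
}

interface ReviewedResponse {
  text: string;
  approvedByUser: boolean;
}

// Stand-in for an AI call; a real tool would call a language model here.
async function suggestDraft(conversationContext: string): Promise<Draft> {
  return {
    text: `Thanks for sharing. Could you say more about ${conversationContext}?`,
    source: "ai",
  };
}

// The key property: nothing is used unless the person explicitly selects,
// verifies, and (optionally) edits the draft.
async function composeResponse(
  conversationContext: string,
  userReview: (draft: Draft) => Promise<{ accepted: boolean; editedText?: string }>
): Promise<ReviewedResponse | null> {
  const draft = await suggestDraft(conversationContext);
  const review = await userReview(draft);
  if (!review.accepted) {
    return null; // the user decided not to use the AI suggestion
  }
  return { text: review.editedText ?? draft.text, approvedByUser: true };
}
```

The design point the sketch tries to capture is the one stated above: the tool only drafts, and the user remains the author of whatever is actually said.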
Does ARTT follow best practices for the use of AI in public health communications?
Artificial Intelligence (AI) developments, especially generative AI, offer better, more efficient, and even more creative ways of working. However, there are challenges for public health communicators seeking to appropriately use AI-powered tools. To address these challenges, the ARTT project is working with the National Public Health Information Coalition (NPHIC) to host a working group to develop a set of practical guidelines or best practices for public health communication professionals. These new guidelines will encompass different AI technologies and actual use cases for communicators. The working group aims to share a draft of the guidelines for feedback in 2024. Read more about the Ethical Use of AI in Public Health Communications Working Group.