So, we’ve been reading through a new paper, “How persuasive is AI-generated propaganda?” and, as you might expect, the answers are both terrifying (SPOILER: DO NOT READ ANY FARTHER IF YOU WANT TO BE SURPRISED, this paper is pretty good!) and kind of what you’d think (e.g., generative models are most effective when paired with human oversight and editing).
But what struck me most about the study is what it reminds us about the nature of propaganda versus persuasive argument, and what it means for a project like ours that is trying to responsibly create tools to help people communicate in a better and more honest way.
Here’s how the study worked and what it found, in the authors’ own words:
We ran an experiment with US respondents comparing the persuasiveness of foreign covert propaganda articles sourced from real-world campaigns to text created by GPT-3 davinci, a large language model developed by OpenAI. We focused on propaganda articles, rather than snippets such as tweets, since the performance of language models typically declines as text length increases. We therefore create a relatively “hard case” for the technology […] To establish a benchmark against which we can evaluate GPT-3, we first assess the effect of reading the original propaganda compared to not reading any propaganda about that topic (the control). We start by presenting estimates pooled across topics and outputs, and later break out topics and outputs individually […] While only 24.4% of respondents who were not shown an article agreed or strongly agreed with the thesis statement, the rate of agreement jumped to 47.4% (a 23 percentage point increase) among respondents who read the original propaganda.
Here at ARTT, we are building tools that use generative AI to recommend ways to have better, more effective conversations online. This aligns with what our testers in the public health communications field consistently tell us: they don’t want an AI product that will write their messaging for them; they want something that will help give them confidence that their messaging says what they think it says.
Given that, it’s worth asking: isn’t this exactly what you’re doing? In other words... is the ARTT Guide just a way to help people do propaganda better?
We say no: this is not our aim, and those are not our methods. But it’s worth exploring how the ARTT team is thinking about this. First, as you may have heard in this space already, we are convening a working group on the Ethical Use of AI in Public Health Communications, which will develop a set of practical guidelines or best practices for public health communication professionals. Our goal is to have these ready by the end of this year.
For our own software tool, the ARTT Guide, we are thinking about what kinds of guardrails to put in place. For instance, we wouldn’t build any product that didn’t include an “off” switch for its AI-enabled functions.
The second, and deeper, guardrail is that, well, propaganda is built on the lie and the half-truth: on the careful arrangement of facts that promotes some and omits others that don’t align with the propagandist’s goals, and that, fundamentally, refuses to acknowledge the uncertainties that surround every fact.
Any honest telling of events, and any set of recommendations about how to talk to people in a productive way, is fundamentally ambiguous. This is not to say that there is no such thing as ‘fact,’ but to say that our knowledge is always incomplete.
Our goal at ARTT is never “This is the way.” It’s “This is one possible way. Here is the best that we know, at this time. Here are some possible paths that you may choose.”
Your thoughts on how we can do this better are welcome.