AI Produces Better Results Than Humans. Why Don’t We Trust It?

April 16, 2024

How do we incorporate a sense of “personal” accountability into projects that use AI?

One of the ways in which our current AI-enhanced moment leaves me cold is the inescapable feeling that, as neat as the tricks can be, you’re ultimately talking to your toaster or another Internet-enabled appliance. This is a radical departure from ✨Social Media✨, our culture’s last big tech boom. There was something so special, especially in those early days of Facebook, Twitter, and other, now-forgotten platforms, about the notion that you could just instantly connect with all of these interesting people from all over the world — famous people, unknown geniuses, even dogs, all interacting with li’l ol’ you.

Nowadays, we have the opposite problem — we invited everyone and anyone into our house, without really thinking through what “everyone” meant, and now Bored Apes are rummaging through the fridge, a dozen reply guys are blocking the bathroom door chanting, “Actually,” and a large language model is hallucinating 🥴 on your couch. 

Still, the people we interact with online are, mostly, people of one kind or another. And according to two recent studies of AI-generated text, we prefer to get our advice from people — even when we acknowledge that our Internet toaster gives better advice.

The first study, from researchers at USC, finds that humans rated AI-generated messages as better and more consistent than human-generated ones… except when humans were told that the messages came from an AI:

AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. human) [...] In a follow-up study where the responses were rated by third-party raters, we found that compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal.

The second study, by Benjamin Toff and Felix Simon, offers further confirmation that people place less trust in information when it is labeled as generated by a large language model:

We test whether audiences in the US, where trust is particularly polarized along partisan lines, perceive news labeled as AI-generated as more or less trustworthy. We find on average that audiences perceive news labeled as AI-generated as less trustworthy, not more, even when articles themselves are not evaluated as any less accurate or unfair.

The challenge, then, is to understand how to react to a world in which AI can provide better results but people trust the output less.

One way to address this is to try to correct people, e.g., “Our AI overlords are much better at this than we are! Join them! IT'S BLISSSS!😍”. But that approach seems to be of limited utility. Instead, in trying to understand this phenomenon, I wonder if there isn’t something to consider about accountability. 

For example, your eccentric uncle Claude might give you bad advice, but you know he’s a real person, and that as a real person he’s accountable to you. Your government is also accountable to you, as is your doctor, your mechanic, etc. Claude the AI? Not so much.🤖 

And this, to me, is where AI projects have a lot of work to do. How do we incorporate that “personal” accountability into projects that use AI? How do we ensure that it translates to the people who use our software?

What this points to for our ARTT Guide: If we use AI as a supplemental tool, people are fine with it, and it improves our work. But the work still needs to come from us, and not from a toaster.🍞

Let us know what you think.
