When ChatGPT and similar systems first became available, people noticed right away that they could provide completely wrong answers. But they would do so in language that was confident and plausible, because that is how they are designed.
Some people started to say “ChatGPT lies about information”.
But almost immediately, people started pushing back, saying that it isn’t “lying” because that implies sentience or consciousness. Saying it is “lying” is “anthropomorphizing”, i.e. attributing human behavior to something that is very definitely not human.
Instead, some people said, let’s refer to this false information as “hallucinations”, as that is in fact a term used in AI research. So we say instead “ChatGPT hallucinates information.”
I personally like that term. It provides a way to explain to people that these AI tools just make stuff up!
But, as noted in this excellent Ars Technica article by Benj Edwards (that you really need to read to understand all this!), the use of “hallucination” has two issues:
- It, too, is anthropomorphizing, ascribing human behavior to a non-sentient, non-human thing.
- More importantly, saying an AI “hallucinates” suggests the behavior is excusable. “Oh, yes, Fred was just hallucinating when he said all that.” As if it were just random memories or a trip on some kind of drug. It lets the AI creators off the hook a bit. They don’t have to take responsibility for their errors, because “it’s just the AI hallucinating”!
Which is fine… I can go along with that reasoning.
But… the author then suggests that we instead use the term “confabulation” from psychology, as in:
“ChatGPT confabulates information”
Hmm. While I get that “confabulation” may be more technically accurate, I think it still has the same issues:
- It is still anthropomorphizing.
- It still lets developers not take responsibility. “Oh, it’s just the AI confabulating.”
But more importantly… “confabulation” is NOT A WORD PEOPLE REGULARLY USE!
At least, people who are not in psychology.
If we as technologists want to help the broader public understand these AI systems, both their opportunities and challenges, then we need to speak in plain language.
I do think we need to go back to the beginning and just say “ChatGPT lies”.
This has two important aspects:
- All of us understand “lying”.
- It puts the responsibility on the AI system - and its developers - for “behaving” that way.
Yes, it’s anthropomorphizing. No, ChatGPT and other AI systems are NOT human or sentient. No, they can’t really “lie” in the human understanding of it.
But we can use that term to help people understand what is happening here.
ChatGPT and other systems are lying. They are NOT giving you true information.
Let’s call it like it is.
——
P.S. It turns out that Simon Willison, who has been diving far deeper into the world of AI than I have, has written something similar: “We need to tell people ChatGPT will lie to them, not debate linguistics” - please read Simon’s post for another view!
——
Image credit: from Bing Image Creator (DALL-E) using the prompt “create an image showing an AI that is hallucinating”