The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!
Indeed. It's a machine that doesn't work.
I think anthropomorphism is the root of most of the problems with AI. People talk about it thinking, feeling, and, as you said, hallucinating, when it does none of these things, and they end up treating AIs like knowledgeable people because they produce plausible-looking output with absolute confidence.
People anthropomorphized ELIZA (way back in 1966). Now imagine the lure of a vastly more sophisticated one (with still no theory of mind). Probably a better therapist than ChatGPT. Or at least safer. en.wikipedia.org/wiki/ELIZA
I think it would improve the situation if AIs were banned from using the first person in responses, or from using any terminology that reflects a human mental process, like when they're processing and put up a message saying something like "thinking" during the delay.
That's a fair point

Revising how I insult this useless bullshit
I liked the idea of calling them mirages. Removes intentionality, and evokes the errors' plausible-seemingness and fundamental unreality.
I've explained professionally that all GenAI output is hallucination, but some hallucinations happen to align with the querent's confirmation bias.
It’s not just an error, it’s error asserted as accuracy: it’s bullshit.
I hate that it has become the term of art, and that people will push back if you try to assert that it’s a categorically inaccurate word to describe genAI not producing the answers people want!
my hallus were no dreams^^
I agree, anthropomorphising wrongly predicted results is bad, but 'describing things that aren't there' fits.
'Error' implies hallucinations are the result of a program bug, instead of an inherent result of how the word-prediction shit works, bc there's no truth, just a close enough.
Yeah, these LLMs are not great, but the marketing is what's been keeping them afloat. Like so many AI projects, they've been just one more version away from working correctly (for 10 years).
I find it helpful to say that they are structurally indifferent to truth. Even when they're right it's incidental to the process
"All models are wrong. Some are useful."
--George Box
It has been mind-boggling to me that this has been an obvious, understood limitation of language models since they were first conceived, and yet here we are, with very serious people shoving LLMs into applications where truth can be life-or-death relevant.
‘Machine learning’ does the same thing. It’s just curve fitting.
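To make the curve-fitting comparison concrete, here's a toy sketch (the data, seed, and names are all invented for illustration): a least-squares line fit minimizes error against the points it was shown, and will extrapolate far beyond them just as confidently, with no notion of whether its predictions are true.

```python
# Toy curve fitting: the model only minimizes error on the points it saw.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.shape)  # noisy samples of a line

coeffs = np.polyfit(x, y, deg=1)  # least-squares fit: slope and intercept
model = np.poly1d(coeffs)

print(model(5.0))     # interpolation inside the data range: plausible
print(model(1000.0))  # extrapolation far outside it: just as confident
```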
Also the things it hallucinates are categorically the same as the things it says correctly. It's all the same soup
You know how much governments and corporations love their euphemistic terms. I mean, now I'm sure you know this, but for anyone else who may not know, back in the 1950s, the U.S. government had "sunshine units", which was the fun term for strontium units which is how we measure nuclear fallout.
Ironically hallucinating is the most interesting thing AI does and they're stamping it out lol
and there is an existing word, confabulation, that they could have used that would be a closer parallel to what is happening.
"Hallucinate" is wrong but not for the reason you give.

When an LLM makes up a sentence that sounds plausible because the sequence of words has a high probability given the context provided by the prompt, it is performing exactly as designed.

It's not a hallucination, it's just a plausible madlib.
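A minimal sketch of the "plausible madlib" mechanic (a toy bigram model over an invented corpus, nothing like a real transformer): each word is chosen only because it most often follows the previous one, so fluency is guaranteed and truth never enters the computation.

```python
# Toy bigram "next most likely word" generator: plausibility, not truth.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the fish .").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # highest-count next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent, high-probability, and quite possibly false
```

Every step picks the locally most probable word, yet the resulting sentence appears nowhere in the training text, which is the whole point: working as designed, indifferent to truth.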
Needless to say, selling plausible madlibs as The Solution to Everything is ... interesting.
It also treats being right and being wrong as though they were different processes. When it's wrong it "hallucinates", implying that when it's right it's putting actual thought into the answer. But no: right or wrong, it's the same process of statistical generation. Sometimes what it generates happens to align with reality.
An LLM that’s “hallucinating” is doing the exact same thing as one that’s generating “correct” output: predicting the next most likely word based on its input and training.
Even if we're reusing neuropsych terms, the behavior is far more similar to confabulation (making up shit you don't know) than hallucination (perceiving things that aren't there)
Confabulation occurs when individuals mistakenly recall false information, without intending to deceive. Patients with amnesia often fill in the gaps with confabulation. I myself did it in the past to hide DID-related amnesia.

AIs have memory and make statements, but no intentions. It fits.
even 'fabricate' would be an improvement because it successfully connotes 'made this up out of thin air'.
Excellent point. They are errors.
Atlassian's AI doesn't hallucinate; it uses shorthand. 🤡💩
"Is this the real life! Is this just fantasy?..."
That has had my gears grinding since I first heard it applied 😡
If I told my boss I messed up because I was hallucinating, I'd be fired on the spot and be sent to a farm upstate.