Thanks.

Actually, the paper is quite different from the discussion here.

Many people seem to be hallucinating what they think the paper says without verifying the source.
You are right that the article is telling a very different story from the paper it's based on. If you plot the paper on the spectrum of opinion on this topic, it is optimistic relative to previous work — willing to predict "effective mitigation" and even "suppression of hallucinations."
"mitigation' - or simply don't use LLMs. I don't see the utility in any sphere of activity, unlike machine learning which is very useful.
But these are as similar as chalk and cheese.
always have been
I don't think that's true!
The paper supports the assertion that due to the way LLMs are currently created and evaluated, 'hallucinations' are an inevitable and intractable problem.

If we want to get rid of the problem, there needs to be a sea change in core programming and training methods.
And it talks about those. A lot of people here are treating this as OpenAI disclosing that LLMs are useless.

That is not what it says.
You seem to be interpreting this paper very differently from everyone else who is reading it. Are you actually reading it, or are you getting an LLM to summarize the points for you?
As long as we're going to sources, let's compare this paper to a related one from a year ago with the same lead author, but with a team that wasn't 75% OpenAI employees:
dl.acm.org/doi/10.1145/3618260.3649777
An interesting article. Link for non-ACM subscribers:

arxiv.org/abs/2311.14648