Consider looking at the paper for a second or two.
Having read the paper, I'd say it more or less confirms that LLM output can tell you what "many people are saying", but it's only likely to be factual if "many people are asking" the question (so that the AI's trainers bother to fact-check those answers), and if those trainers know enough to fact-check them correctly.
Which is fine for some queries, like "turn on the living room light". But for others, like "is white genocide real", an AI that's been trained by Elon Musk will only produce correct answers by accident, and he'll try to reduce those accidents.