Nope, it's correct. An LLM has no concept of right or wrong: it literally generates ("hallucinates") whatever answer is statistically likely given its training data; sometimes that matches reality, sometimes it doesn't. Whether it does is unrelated to the generation process itself, and more a function of the sources it was trained on.
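A toy sketch of the point (hypothetical probabilities, not a real model): the sampling step only cares about how likely a token is, and factual correctness never enters into it.

```python
import random

# Hypothetical learned probabilities for the token following
# "The capital of Australia is" -- these reflect frequency in
# training text, not truth. (Made-up numbers for illustration.)
next_token_probs = {
    "Sydney":    0.55,  # common in training data, but wrong
    "Canberra":  0.40,  # correct, but written less often
    "Melbourne": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a token weighted by its learned probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The likely-but-wrong answer comes out most of the time; whether
# the output matches reality depends entirely on the sources.
print(sample_next_token(next_token_probs))
```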
September 21, 2025 - 17:14 UTC