Nope, it's correct. An LLM has no concept of right or wrong: it literally hallucinates an answer that's likely (given its training data); sometimes that matches reality, sometimes it doesn't. Whether it does is completely unrelated to the process itself, and more a function of the sources collected.
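To make that concrete, here's a minimal sketch (with made-up probabilities, not a real model) of what "likely, not true" means: generation is just weighted sampling over tokens, and nothing in that step checks the answer against reality.

```python
import random

# Hypothetical toy distribution: an LLM samples the next token from
# probabilities learned from its training data. "Atlantis" is wrong,
# but it still carries probability mass if the sources mentioned it.
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.23,
    "Atlantis": 0.15,
}

def sample_token(probs):
    """Pick a token weighted by likelihood -- truth never enters the process."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

token = sample_token(next_token_probs)
# Whether the sampled token matches reality depends entirely on the
# distribution, i.e. on the sources collected -- not on the sampling itself.
```

The point of the sketch: the mechanism is identical whether the output is right or wrong; only the learned distribution differs.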