In this case, it is just like humans.
Humans, or at least some of them, are capable of admitting they don't know the answer. LLMs are not merely incapable of that; the incapacity is one of their fundamental properties.
This is untrue. Often, if you correct one, it acknowledges that what it said previously was incorrect and tries again.

It is important to see them as chatbots, no more, no less. That means not expecting them to be correct, and checking their information just as you would with anything else.