No LLM has any perception of meaning. Underneath it all, the way they work is: "Based on this reference data, show me something that looks like a statistically probable answer to this prompt."

"How many R's are in strawberry?" It doesn't know what an R is.
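For comparison, the letter count that trips up LLMs is a one-line deterministic operation in ordinary code. A minimal Python sketch:

```python
# Counting a specific letter is exact string manipulation,
# not statistical text prediction.
word = "strawberry"
count = word.lower().count("r")
print(count)  # 3
```

The point isn't that the question is hard; it's that a model working over tokens of probable text never operates on letters the way this code does.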
For answering questions, LLMs can be helpful for finding a needle in a haystack, but for anything important, any positive result needs to be verified, and any negative result should be ignored.