The problem is you cannot teach it to never be wrong. There isn't ever a point where you could trust it like somebody who has learned the job they've been doing for years on end. There will always be that bit of randomness in there.
It has no concept of right or wrong, only positively and negatively reviewed sentences. An LLM is a language model, not a truth model. If anybody wants to try making a truth model, that might be interesting, but an LLM can't become a truth model just because you hope it will.
So it's worse than the example, because you end up becoming a tool that helps it, and it will never actually be reliable.
Yeah! The wild card! Like Charlie! Exactly how things should function!
It's part of the design!
Yup. Pretty useless. If calculators spat out bullshit numbers, we rightfully wouldn't use them.