No, it doesn't. Changing the incentives in training can only reduce the probability of error, not eliminate it. There's an inherent lossiness in this type of algorithm, and there are inherent structural limitations. If the current algorithms were good enough, longer training would already have worked.
Yes, and there's no reliable way to make it always admit it's not sure, because there's no reliable internal metric of certainty.

No, not even that entropy metric they found recently. Ironically, that particular measure would make it worse at applying logic to obscure statements.
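To make the objection concrete, here's a minimal sketch of the standard token-entropy certainty proxy (generic PyTorch; the function name is illustrative, and this is not whatever specific metric the comment above refers to). Low entropy only means the next-token distribution is peaked, which a model can produce while being confidently wrong:

```python
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the next-token distribution at each position.

    logits: (seq_len, vocab_size) raw model outputs.
    Returns: (seq_len,) entropy per position; low entropy = "confident" token.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1)

# Toy usage: a peaked distribution scores near-zero entropy even when the
# model is confidently wrong, which is why entropy is a proxy for certainty,
# not a reliable measure of it.
logits = torch.tensor([[10.0, 0.0, 0.0, 0.0],   # peaked: low entropy
                       [ 1.0, 1.0, 1.0, 1.0]])  # flat: high entropy
print(token_entropy(logits))  # approx. [0.0015, 1.386]
```

Gating outputs on a threshold over this number would suppress exactly the high-entropy cases where the model is reasoning over obscure statements, which is the trade-off the comment above points at.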