That is not what the paper says; rather, it's quite simple to avoid once you know what you're looking for:
No, it doesn't. Changing the training incentives can only reduce, not eliminate, the probability of error. There's an inherent lossiness in this type of algorithm, and there are inherent structural limitations. If the current algorithms were good enough, longer training would already have worked.
The problem is, if you avoid it that way, what it produces will no longer be what most people want. The training incentives are the way they are for a reason. People love simple confident answers.
That is not what the entire article says. Its concluding statement is that the problem could be made less severe, not that it can be completely solved. Go back and read the whole thing instead of just the last paragraph.

And there's nothing simple about the proposal. Many models are getting worse.
Who cares what the article says?
It is true; they used math in the paper, lol.

i recommend reading the *paper*, not the skewed summary of it