It's a bad attack on LLMs though, because it conflates two different questions: "can this task be done without critical thinking by *some* system" and "do *humans* need to use critical thinking to perform this task". Both can be true, and the latter is all that matters when deciding learning tasks *for humans*.
It's the same error people make in the other direction: assuming that because an LLM succeeds at a task that demands humans' full mental powers, the LLM must therefore match or exceed those mental powers.
It also devalues human intelligence IMO. Like, the long division algorithm is a very simple thing computationally. There is nothing hard or impressive about a computer doing it. A human child, using a brain THAT NEVER EVOLVED FOR THIS, learning to execute and utilize that algorithm is a marvel.