So, according to OpenAI, their product is worse than useless, and so is everyone else's.
consider looking at the paper for 1 or 2 seconds
Having read the paper, it largely confirms that LLM output can tell you what "many people are saying", but it's only likely to be factual if "many people are asking" the question, so that the model's trainers can fact-check those answers, and only if they know enough to fact-check them correctly.
it’s ok, they’ll just need to take even more of society’s money, energy, and water to make a second LLM whose dedicated purpose is quality-checking the first one. there’s no alternative
And that it cannot be fixed.
I believe that is what they refer to as a "reasoning model"
What I think could work is that their LLM understands the question and then connects to a math API to deliver the answer. Then again, people could just skip the LLM and save money and energy.
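A minimal sketch of what that routing could look like. The names here (`route_question`, `safe_eval`) are illustrative, not any real product's API: the idea is just that plain arithmetic gets detected and evaluated deterministically, and everything else is handed off.

```python
import ast
import operator

# Hypothetical sketch: route arithmetic to a deterministic evaluator
# instead of asking an LLM to "do math" in its own output.

# Map AST operator node types to their concrete arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression; reject anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def route_question(question: str) -> str:
    """Arithmetic goes to the evaluator; anything else would go to the LLM."""
    try:
        return str(safe_eval(question))
    except (ValueError, SyntaxError):
        return "(hand off to the LLM)"

print(route_question("3 * (7 + 5)"))          # deterministic: 36
print(route_question("Why is the sky blue?"))  # not math, gets handed off
```

Of course, as the comment notes, for questions like these you could skip the LLM step entirely and just use the evaluator.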
There’s utility in narrowing bands of uncertainty at the very worst!