LLMs are great at doing what I call "vibe hacking," which is giving users the false impression—the vibe—that they're accomplishing way more than they are. It's part of why I'm so skeptical every time someone tells me how much better a new model is. Better by what metric? Anecdotal estimation?
I keep thinking about this study showing that programmers who used LLM assistance *believed* it reduced their time to completion by about 20%. In reality, they took 19% longer on average on the tasks where they used the LLM than on the tasks where they didn't. They just had the *vibe* that they were going faster.
So when you say the new models are "better," by what metric are you measuring that? Are you just estimating based on your personal experience? How do you know your mental heuristics haven't been hacked by a digital con artist?