LLMs are great at doing what I call "vibe hacking," which is giving users the false impression—the vibe—that they're accomplishing way more than they are. It's part of why I'm so skeptical every time someone tells me how much better a new model is. Better by what metric? Anecdotal estimation?
March 30, 2026 - 21:17 UTC