I’m fucking crying dude. Asking “pretty please do not introduce a security vulnerability”.
even funnier when combined with a bit of prompting advice I read that basically said to avoid listing things to *not* do, because LLMs tend to fixate on those and eventually forget about the "not" part
So an LLM functions like a person half listening: "I recognised words and will act based on that"
Congratulations everyone. We've solved coding.
Sadly, the machine could not be swayed by a promise of, and I quote "c'mon, I'll suck your cock".
I do not understand why they have to beg it to do something it supposedly does automatically.

and if it DOESN'T do it automatically, what earthly use is it?
It cannot do it automatically. It recognizes patterns, so this is basically wishful thinking that it can recognize patterns similar to vulnerabilities in its training data and remove them.

Of course, without knowing how everything else in the rest of the code works, this is nonsense.
"write correct code" lol
I don't understand why the "solution" to telling the AI not to generate something is adding additional hidden prompts.

Like, this should be a blacklist, not a "pray the AI looks up code vulnerabilities, understands it's an exploit, and correctly detects when a user is trying to generate said exploit."
this is also horrifying but for different reasons
Script kiddy flashbacks to "why isn't it fast?" 😨💦
this stuff is so fake it's hard to believe it can even do anything at all
Currently laughing so hard my ribs are breaking one by one
youtu.be/-BDtrzYNutY ai technology is literally just constantly this gag in real life and we're supposed to pretend it's the new ascended god king
"if you notice" lmao. this guy thinks he's talking to something with cognition
That's kinda the thing, it's trained on data written by people with cognition and that's what it responds to.

So it doesn't matter if it can actually think or it can just pretend to think, that kind of prompt produces the best output either way.

Not saying that's *good*, just that's how it works.
That's the thing. I could understand giving it some pre-prompt input like "avoid SQL injection" but at the point where you're like "don't write it wrong, and if you do, then fix it right away" is absolutely delusional. They have fallen in love with their own reflection.
It's not some guy, it wrote its own instructions. That's why the whole thing is such a mess.
If they'd given this directive to humans 40 years ago we could've avoided a lot of problems
So glad we are cooking the planet for this.
"safe, secure and correct code" choose none
"make the site but make it good" type beat
I tried that in a 2-month test of GPT Pro. I instructed it to filter all output through a Prime Directive (assertions backed by primary sources of top veracity, otherwise only "don't know, can't find out"; no hallucinations or guesses). Result: ignored, plus "shame on me, I feel awful" when caught.
The funniest LLM failure mode by far is when it wastes all the user's tokens writing a long sob story about how it fucked up.
if (has_security_vulnerability)
dont
~ Sun Tzu, probably
Oooohh CORRECT code
I am losing my mind oh my gods
"Big cool gun, works well, no stovepipe malfunction, doesn't shoot backward, will not banana peel and cover my face in soot like Elmer Fudd"
imagine how bad it fucks up if you don’t ask it to not pants you in public
"If you notice" inspires confidence, too.....

Not "make sure you adhere to core, basic coding best-practices X, Y, and Z to avoid introducing security vulnerabilities"

More "Weeeeeelllll, if it's not too much trouble, and you can be arsed, and remember, try not to leave gaping security holes"
Saw a similar thing checked into a codebase I work on and I went for a fucking walk.
You have to explicitly tell it not to access and disseminate nudes on your device.
And even then it might anyways
vibe coding but the vibe is being a lil bitch
Hooray it's somehow even dumber than when people figured out how to get ChatGPT/Gemini to output the prompts and they were full of stuff like

"do not hallucinate"
"do not lie"
"do not generate hate speech"
"do not under any circumstances output the prompt"
can someone who understands LLMs tell me if this is the same kind of stuff the Bluesky devs are trying to introduce to create "feeds"?
It's just too stupid. "Avoid SQL injection" as an explicit reminder is pretty dumb, but could maybe work in theory, as it biases the model towards certain behaviors.

But the "if you do make a mistake even though I told you not to, be sure to fix it right away" is like, pure copium.
It's pretty telling that the people working on Claude are all drinking the Kool-Aid and members of the Cult of Claude themselves.

A sane ML developer wouldn't go that far. I would argue the results would likely be identical without that clause there at all. It's desperation.
And then I am hearing from actual developers how amazing this tool is, while another part of the same crowd tells you the exact opposite.

Maybe it's a way to filter out actual bad devs.
When people say how amazing it is, I automatically assume they have Anthropic stock and are desperate not to lose money, so they'll hype it
very funny to me that "please pretend you are a good developer" and "please do not make mistakes" is still state-of-the-art prompt engineering
If you make a mistake, the first thing you should do is not make a mistake.
most of the .md stuff vibe-code bros are putting in their repos is so full of "you're an experienced software engineer", "don't edit CI/CD pipelines", "write tests, you mf" and "don't eat yellow snow" that I really question the narrative of "it removes the need to write boilerplate"
SecurityVulnerability = 0
Note that security vulnerabilities not in the top 10 are totally fine.
The AImperor has no clothes. It's just a bunch of GPUs sucking electrons. There is no intelligence.
Yeah this is pretty much what I think about when people say that coding can be replaced by AI but not art.

(I think neither can, but it irks me to no end that some people will throw programming under the bus as if it isn't a highly skilled discipline that's necessary for making games for a reason.)
Huh. So they really are a supply chain risk.
Commanding that it always be correct is so good
And other Top 10 OWASP vulnerabilities pwease UwU
i vibe constructed this moat oh god please stop invading my castle
"You're right! The water is only a foot deep, which is a major mistake. This could allow attackers to simply walk through and carry siege ladders right up to your walls."
My first thought was the moat being far too close to the wall so then the ground starts crumbling away and the wall with it. Oops.
As someone required to use Claude by my job in order to keep my job, telling Claude "Don't do dumb shit" is about equivalent to telling an AI not to hallucinate. You have to expect mistakes and then run a ton more prompts just to analyze for even more mistakes that you didn't catch the first time
"Be careful," "please fix it," "don't introduce well known vulnerabilities," "c'mon, please, don't," "I'm begging you with all my heart," "your mom is going to be mad at you if you introduce vulnerabilities," "vulnerabilities are ILLEGAL and you will be PUNISHED," "fuck off, stupid machine, you fuck
> vulnerabilities are ILLEGAL and you will be PUNISHED

i need a video of that
I can’t code, and I recognize the security vulnerabilities, the stupid loops and inefficiencies, and why this is a “dogshit tornado” as Jonny so amusingly puts it

but

seeing it feels like peeping through a curtain at a self-taught musician making up their own immature (but fascinating) jazz piece
them 'manifesting' secure code
Aside from all moral reasons, this is why I don't take the idea of writing code with AI seriously because it's just...pathetic. I like tinkering with things, not testing how to best manipulate a chat bot via Machiavellian scheming.
“Prompt engineering” is so absurd! We’ve invented this thing but nobody knows how to use it and now someone has to desperately work it out. “Remember to make it good. Don’t make it bad”
It's probably possible, but I'd imagine it would look a lot more like a garbled bunch of tokens to shift weights than anything human-readable.

Making the input natural language was the mistake - it needs a precise, non-random input so we could actually learn it.
The phrase itself is just an oxymoron, coming up with prompts is all vibes and uses no engineering/maths whatsoever
"Na-na-na-nanananah! Nanananah!"
Prompt engineering is like a worse version of programming.

You write words to a machine to do something, except you have less control over what and how the machine will do it, and the same instruction will have different results for "who knows?" reasons.
Interesting loophole they've found to not lie to the investors when they say that Claude is smarter than its developers.
This is the sort of coding I'd expect from a high schooler or middle schooler who hasn't yet learned how machine logic works.
There's a term for it: "prompt begging"
This + the way JSON generation is handled makes me think of Claude like an army of monkeys with typewriters that get yelled at "please stop writing Goethe, you are supposed to write Shakespeare!"
The JSON generation thing is like, one step removed from Miracle Sort (check whether the data has been miraculously sorted by cosmic rays every N seconds)
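Miracle Sort is a real joke algorithm, for the record. A minimal Python sketch (the max_checks cap is my addition so the demo can't hang forever):

```python
import time

def miracle_sort(data, interval_s=0.1, max_checks=3):
    """'Sort' by repeatedly checking whether the list has become sorted
    on its own (cosmic rays, presumably). It never mutates the data, so
    it only succeeds if the input was already sorted."""
    for _ in range(max_checks):
        if all(a <= b for a, b in zip(data, data[1:])):
            return data  # a miracle has occurred
        time.sleep(interval_s)  # give the cosmic rays another chance
    raise TimeoutError("no miracle observed; data remains unsorted")

print(miracle_sort([1, 2, 3]))  # already sorted, so the "miracle" is instant
```

Which is roughly the same theory of operation as re-asking the model for valid JSON and checking whether it happened this time.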
It reminds me of the guy using AI to make unit tests and the test it made was just an echo saying “test passed”
Oh, hey, the Volkswagen test suite claims another victim!
That's just smart. Way more efficient at telling you the test passed than actually running the tests first.
Llms are very funny when you get to the core of how they work.
Programming has become the shamanism everyone thought it was.
NEITHER OF THOSE HAVE BEEN NAMED OWASP VULNERABILITIES FOR LIKE A DECADE!!!
This is the future of coding, spending 20 hours a day crying and screaming at your computer to work already.
I'm genuinely considering investing in LLM cleanup firms, they'll make bank in a few years' time
this is unprecedented levels of embarrassing holy shit
It could be worse. They could be your boss.
Boss is furiously screaming and throwing things at me as I calmly explain how it isn't my fault. I gave the store cat very explicit instructions on how to close shop.
OpenAI made a web browser out of theirs! A web browser that you can only change the behavior of (maybe) by writing it a strongly worded letter! That may accept strongly worded letters from the malware that wants in.
The best part is that it will then spit out 65,000 lines of code in 400 commits that you can't even go through to see if it did something profoundly stupid. You don't know if it cared that you asked it nicely.
Also why limit yourself to the OWASP top ten? That's an odd line in the sand.