i've asked an llm one question, about something i know about, and it made up a bunch of incorrect bullshit that i knew to be false. what am i supposed to do with that
What we think of as AI is remarkably valuable in very, very small applications, but the current bonanza of misplaced confidence in a tool that ends up costing businesses more money than their labor overhead, just for the chance to fire humans, is obscene.
Regurgitative is a more apt word than transformative.
It's not good at that. It's good at a bunch of secret shit no one will talk about openly, for some reason, but if you didn't suck so bad, you'd figure it out yourself you moron.
Did you ask it the frog riddle?
Yesterday the search summary told me the Rubicon Trail (in California) is named that because it crosses the Rubicon River (in Italy).
I asked it to "check if this Chinese sentence is correct" and it just added a bunch of commas. Oooh, the future!
Seems like your LLM is working on a novel instead.
Also, if you just got better at using AI it would stop being a massive drain on electricity and wouldn't run on stolen data. Those are somehow skill issues.
Shaun, don't you see, you forgot to ask it nicely to please not be incorrect.
AI is stealing jobs from actual artists. My feelings about it begin and end there.
You need to learn to ask it questions you aren't already informed about. That way you can just accept what it says as correct and have your life transformed by this alternative to actual research.
It’s good as a search engine
He's an economist at the Brookings Institute, so he feels a certain kinship with LLMs since they also take money for the service of being wrong about everything.
What did you ask it?

I asked it about whether Israel is a colony, and it gave a "there-are-valid-points-on-both-sides" answer.
You invested exactly the correct amount of time and discovered what they mean by "transformational". You can call that a win and walk away. Or you could make a four hour video about it. Either is good.
Heck, what are you supposed to do with the knowledge that if enough people posted that 1+1=3 consistently over a long enough period of time, eventually LLMs would start returning that as a "fact"?
Posting debunkings in volume, using similar wording to the false claim but giving the real answer with more variety in language, is enough to do that.

I ran into that one time I didn't catch that a search engine AI response wasn't a sample of a webpage. It confidently made the easy mistake a human would immediately catch.
It has no idea what it's outputting. So you can rerun the same prompt asking for a numerical answer, and it'll change practically every time you run it.

Because it's a prediction of a response, not an actual answer. Imagine running a conversation just by using the middle option of predictive text.
I love Google's AI telling me something and then linking a source that says the opposite

and getting it to change its answer entirely by altering the search terms I used
My experience with these sorts of tools is that if you ask it the same thing twice, it gives different results. They could contain the same facts with different language or different facts.

I don’t see why that’s not a huge red flag to people.
If I wanted to interact with something that didn't check its own links, I could argue with transphobes; we didn't need an AI for this.
The two positions are actually

1. “Wow, it’s magic! It can teach me things I know nothing about, and do things I can’t”

2. “I know how it works, and whenever I ask it about something I’m educated in, it’s wrong in a lot of obvious ways. It’s entirely useless”
I was doing some very basic data analysis at work and the biggest AI fan in the company was very impressed by what I was doing. It made a lot of sense why they were so excited by something that pretended to be clever
Here's my thing. Even if AI was the miracle machine they're peddling it as, I still don't think I would have any desire to use it?

I don't have any desire or need for a machine that thinks for me, and any time 'saved' would never be worth the lack of direct input.
they're spending billions of dollars to make a robot that replicates that one guy you know who just starts making shit up that sounds right if you ask him anything, rather than saying "I don't know", and then they try to sell you on how impressive it is that it answers questions
The best short description of genAI I've ever seen is "Mediocre White Guy as a Service".
it's funny because a huge chunk of LLM evangelists are people who refuse to "learn how to use" their brain.

Learn to code? Nah, just ask ChatGPT.
Learn how to make art? Nah, just ask ChatGPT.

It's just a way people think they can bypass the effort stage of learning by spending money.
LLMs are bad question answerers, but are sometimes decent "structured text outputters".
Their uses range from "almost none" to "you can use it to spit some boilerplate code in a commonly used programming language that it would take you ~50% longer to write than it takes to prompt and verify"
Their evils broadly outweigh their utility* though, and the hype surrounding them is completely made up to drain investor money.

(*) Their utility is mostly "it MAY shave an hour or two off your 60 hour coding project unless you use it too much because then it will instead add like 10 more hours".
I asked Google AI how many B's there are in the word blueberry, it said 3.

I then asked Meta AI the same, it correctly stated 2, then gaslit itself into thinking there were 3 when I told it Google's response to the same question, and then ended up telling me there's only one B.
Google's AI overview told me the song "No" had three letters the other day when I was searching for song titles with just three letters.
Additional fun.

Run that multiple times on the same model. The answer will change.
Listen. We don't really know if there is still water in the pool on the titanic. Nobody has been in it to check. Don't think that AI is wrong just because you assume there's water at the bottom of the ocean.
i tried it once to repair my Jellyfin server that wouldn't boot. i ended up with a total corruption of it. so uh, yeah. it's shit.
You keep asking it the question until it gives you the answer you like, upon which you say "Yes, computer. Good computer" and pat it on the head.

You will pay for the privilege of training the AI to not be shit.
Some people argue it's only -sometimes- wrong because it occasionally makes things up, but I'd argue since it fundamentally doesn't understand what it's talking about and just regurgitates probable text, it's -always- inherently wrong.

At best it's accidentally right, but that's meaningless.
"A bunch of incorrect bullshit" explains the average economist's profession
I had a training at work for Microsoft Copilot, and the guy doing it said you could ask it for help on how to do all the weird stuff in Microsoft Word. So anyways he asked it for help on how to update a table of contents and it told him how to do it wrong
In college, one of my politics professors recommended we use AI to look up EU policies for a project. So, I did. Everything it gave me was completely made up, and I wasted 2 hours searching for policies that didn't exist. Never bothered with it since.
When you're past the Barnum effect, you understand it's just a (bad) guessing machine
Pretty sure #2 has a lower barrier to entry since it requires not understanding how LLMs work while also pretending no one does.
I think LLMs are good at first drafts or revisions, where your personal tone isn't important. I wouldn't use them for information gathering. I'm not convinced they're transformational or are a good product per cost, but they are useful.
writing tip: there's no part of the process in which the writer's personal voice is unimportant
you're supposed to ooh and ahh then pocket your check from sam altman I think
I had a quick peek at his other stuff before blocking. 4 days ago he was like "The US would be in a recession without AI" and like

you know that's bad right

that's a bad thing you just said
On that point he may not be wrong, which is how an economic “bubble” gets created

People are over investing by billions in a technology that is significantly overvalued

There is always a certain amount of hype in the total market but this amount in one sector is huge

Subprime loans anyone?
It is very confident in telling you whatever. I cooked up the name of the "Donald J. Trump Prize for Peace and Prosperity", that you will not find anywhere else on the internet, but google's AI tells me that "The phrase is tied to several efforts and events:" which it is not; I made it up.
It was transformational. It transformed incorrect information into a confident answer
Psh I've been doing that for years and no one has given me any special consideration
You have used a screwdriver as a hammer. Sadly it was advertised to you as a great hammer. It's not a hammer. You can sorta use it as one, but only kinda.

AI/screwdrivers are great tools when used properly. They are unfortunately advertised as being easy. AI is not easy if you want good results.
like if i have to spend a bunch of my time teaching it to not be wrong, it's not really a tool helping me, is it. i'm a tool helping it
This is why I never tell Akinator the answer
If I can choose between spending my time double-checking something an overblown algorithm said, or just reading a well-researched book about the subject, I am picking the book.
The problem is you can not teach it to never be wrong. There isn't ever a point where you could trust it like somebody who learned the job they have been doing for years on end. There will always be that bit of randomness in there
Said about people, this is unhelpful. Said about an LLM, this approach makes complete sense.
I'm 99% sure that properly googling something is faster, more informative and accurate (plus better for the environment) than 2 attempts at asking an AI.
3rd position:

I have extensively tried to understand A.I., and it mainly works to do the work we shouldn't be doing, as evidenced by how low quality the acceptable results are.

It is either proof the task is best left undone or woefully insufficient.
They don't have to steal content if people are willingly training it
thinking of all those "vibe physics" dipshits who will acknowledge they have to spend a lot of time correcting fundamental mistakes it makes that they recognize, then turn around and just assume it's correct when talking about theoretical shit they don't understand
Being generous to LLMs, I think that there are specific cases where it's the right tool for the job and has real utility.

But if we only use LLMs when it's the right tool for the job, it's not a tool worth trillions of dollars.
Well, it was a poor use of the tool.

The problem with it that I see is that it got marketed as something it is not to generate hype and the main US companies are not working on making it energy efficient (because the bubble depends on forever growth of data centers)
If anyone here is a tool, it's that Justin guy
skill issue! you see, you need to feed the answer *into the prompt*
i've asked llms for help debugging code to great effect. i've learned quite a bit from being able to ask more complex/specific coding questions than i can usually find good answers to if googled (but which can be verified after the fact). point is, llms are good at some things, bad at others
Bingo! They wouldn't be making this stuff available free of charge if they didn't need us using it for some reason and I think that's the main one.
This is how I feel about self-driving vehicles. So much time, energy, and money is being allocated to teaching a vehicle to do something a human can do with fairly basic effort, in order to solve a problem that does not exist.
That wouldn't work. Because of how it generates output, it's not actually checking anything. Otherwise, ChatGPT forcing the prompt to also inject a bunch of websearch results would prevent it from "hallucinating" in the first place.

Which it clearly doesn't. =P
Was "encouraged" to use it to be more productive in coding. It produced code that didn't work. I fixed it, and, "investing the time", told the AI, which replied "oh right you are, sir! Here is an update." I tried it, didn't work, told it to try again and it said it didn't have the library needed.
can't for the life of me remember where I saw it, but using these AI tools actually hurts productivity significantly because of this. because you have to spend more time fixing their autocomplete outputs than it would take to just do it yourself.

these things make people dumber, on purpose I guess.
That's definitely one of the top 3 reasons I refuse to use AI to help my (incredibly fact-specific) work.

bsky.app/profile/courtana.bsky.social/post/3lwbtx54zss2v
it is *not* a research tool and i don't know why everyone wants it to be
It fucking sucks. I'm trying to get new flowers and plants for my garden and need to make sure they aren't toxic for my cat, and I have to always ignore what Google's AI says because it's usually wrong. It's incredibly dangerous
As an economist, he understands the value there is in being completely wrong a third of the time, hedging his answers to the point of uselessness another third, but being extremely confident of being right as long as it makes the people with money happy.
I learned everything I need to when I once Googled to remember a command in command prompt and the AI response said it didn't exist while the very first result just was the command.
ask it about something you don’t know anything about, whenever i’ve done that the answer seemed legit
I never understood these arguments about "learning AI" because the whole pitch is that you write in natural language and don't *need* to know how it operates. Interestingly, I have yet to see anybody elaborate beyond "write better prompts!"
well, that's exactly what he says. one interaction vs a lot of interactions. it's a great tool to use as a support in areas you know about, at least in the ones I do (Econ and Maths)
The machine that cannot do 1+1 consistently is not good for maths, and revenue − cost = profit does not mean you work in maths either

It makes sense that an economist would defend chatgpt, you invent even more shit on a regular basis
he loves ai so much he phrased his post as a numbered list 😌
It’s entirely because AI does not think. It can only predict.
It is kind of like only using your phone’s predictive text to write an entire response.
Never use it again would be what I would do.
The technology was intended to be a text transformer to enable better natural language processing and help analyze something like an existing corpus of data, not be a wishcasting genie like these guys want it to be. But they gotta make it make money somehow
I think it's a moderately cool technology for its *very* limited real use cases: parsing search inputs to improve access to information, grammar checking according to standardised language and a few other things like that. I do not understand the hype for what's basically a word calculator.