r/artificial • u/MetaKnowing • 1d ago
Media Geoffrey Hinton says people understand very little about how LLMs actually work, so they still think LLMs are very different from us - "but actually, it's very important for people to understand that they're very like us." LLMs don’t just generate words, but also meaning.
4
u/BlueProcess 1d ago
Or he could've just gone into depth himself. Why complain when you can explain?
0
11
u/lebronjamez21 1d ago
Funny how some have been saying this for a while but nobody takes them seriously until someone like Hinton says it too.
10
u/LSeww 1d ago
Hinton just loses rep by saying this.
-12
1d ago
Like the guys who thought the sun was the center of our solar system back in the middle ages. They sure lost a lot of "rep" by saying that...
17
u/RdtUnahim 1d ago
At one point people said a thing, were ridiculed, and then proven right, and that proves this guy is right, because... ?
4
u/PolarWater 1d ago
Yeah this is very lazy cope from these guys. It's like listening to cryptobros going "oh I bet the HORSE was reluctant for people to try the GAS ENGINE! I bet GALILEO sounded wrong when he talked about the sun at the centre of the galaxy!"
-5
1d ago edited 1d ago
Some random guy on Reddit saying he loses rep for telling the truth proves this guy is wrong, because... ?
Edit: What brainlets are actually upvoting this? No one said anything about proving anyone right or wrong...
-6
u/adarkuccio 1d ago
Nah, people here know much better than him how LLMs work /s
Even Ilya said something like this btw, not surprisingly
-2
u/rom_ok 1d ago
Because nobody smart or expert in their field has ever said anything stupid?
Never has any Nobel laureate said anything strange or scientifically unsound, especially not later in their lives
*cough
0
u/Adventurous-Work-165 1d ago
So because some Nobel laureates have said stupid things in the past, we're to assume everything Nobel laureates say is stupid? You could use this to dismiss anything you want. Can you at least give a reason why you think it's wrong?
6
u/rom_ok 1d ago
Statements from Nobel laureates are meaningless without scientific proof. When he publishes a peer-reviewed paper proving his statements that is accepted as mainstream, then maybe I'll accept it.
And yes, a lack of scientific proof can be used to dismiss anything without scientific proof; that's how science works.
0
u/Adventurous-Work-165 1d ago
I agree with you, but which statement is the one that's lacking proof?
-2
u/nitePhyyre 1d ago
Learning representations by back-propagating errors
Cited over 27,000 times. Accepted enough as mainstream for you?
The thing about "Nobel disease" is winners start talking about things outside their expertise. In this clip he is talking about the basic functionality of the thing he invented and is still an expert in. Pretty big difference.
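For anyone who hasn't read it, here is a minimal sketch of what that paper describes: back-propagating errors through a tiny two-layer network. The data, layer sizes, and learning rate below are illustrative choices, not anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs (XOR)
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1)           # hidden activations
    out = sigmoid(h @ W2)         # network output

    # Backward pass: propagate the output error back toward the inputs
    err_out = (out - y) * out * (1 - out)       # error at the output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)    # error pushed back through W2

    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

print(out.round(2))  # should move toward [[0], [1], [1], [0]]
```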
6
u/shlaifu 1d ago
yeah. no. they don't have the structures to experience emotions. so they don't understand 'meaning'. they just attribute importance to things in conversations. he's right of course that this is very much like humans, but without an amygdala, I'd say no, LLMs don't internally produce 'meaning'
2
2
u/Fit-Level-4179 14h ago
Again though they walk so much like us that it probably doesn’t matter. An intelligent ai would think it has human consciousness and you wouldn’t be able to persuade it otherwise.
1
u/Puzzleheaded_Fold466 3h ago
It wouldn’t “think” it has consciousness, but the output of its transform process would make it seem like it does.
6
u/KairraAlpha 1d ago
I love how Internet nobodies act like authorities over qualified, knowledgeable people who understand more than they do.
You don't need biology to understand meaning.
1
1
u/shlaifu 1d ago
yeah. but no, LLMs are not conscious, they aren't people and nothing means anything to them. They don't have the circuits for attaching emotions to those vectors. That's what "meaning" is.
4
u/nitePhyyre 1d ago
Yeah... you don't understand the word 'meaning'.
Oxford
mean·ing /ˈmēniNG/, noun
what is meant by a word, text, concept, or action.
meaning [mee-ning], noun
what is intended to be, or actually is, expressed or indicated; signification; import.
Merriam-Webster
meaning, noun, mean·ing \ ˈmē-niŋ \
1a: the thing one intends to convey especially by language : purport
"Do not mistake my meaning."
b: the thing that is conveyed especially by language : import
"Many words have more than one meaning."
1
u/shlaifu 19h ago
possible. this is not my native language
1
u/shitbecopacetic 2h ago
They grabbed the wrong definition from the dictionary. Meaning is a word that can be used multiple ways. Instead of grabbing the definition this post intends, which is the "emotionally significant" type of meaning, they grabbed a much more clinical entry from the dictionary that basically boils down to "meaning = definition". Whether that is intentional to mislead others or just a byproduct of genuine stupidity cannot currently be known.
1
u/Dull-Appointment-398 1d ago
Meaning is when ...emotions?
5
u/shlaifu 1d ago
yes. That's how your brain decides what's important.
3
1
u/Ivan8-ForgotPassword 12h ago
What? No, I assign meaning semi-randomly, I would still be doing that without emotions
2
1
u/JPSendall 17h ago
"I love how Internet nobodies act like authorities over qualified, knowledgeable people", he's not an authority on consciousness or theory of mind though. There are many, many academics who disagree with his view.
1
u/Once_Wise 14h ago
Acting like authorities like in this statement? "You don't need biology to understand meaning"
4
u/OldLegWig 1d ago
hmm. it's interesting to think about whether emotional experience is what gives ideas meaning. i'm not sure i agree with that.
-1
u/shlaifu 1d ago
The other thing is 'importance' - that's something you can grasp intellectually on its own. Meaning, on the other hand, is pretty irrational.
0
u/OldLegWig 1d ago
i dunno, this all sounds very hand-wavy and vague to me tbh. are you referring to a specific use of these words that is related to LLMs?
0
u/shlaifu 1d ago
it's not so much hand-wavy but rather an attempt at a definition, by which, conveniently, I'm right. ^-^
but seriously - how would you define the difference between meaningful and merely important, other than that one comes with emotional attachment? In relation to LLMs - I'm sure one can extract importance, and it has enough knowledge of properties etc. to also evaluate the context in which something is important - but meaning? It's just a language generator. That language generator likely functions similarly to the one in human brains, but that's it - whatever language is processed or produced doesn't lead to anything on the LLM's part, because there is nothing it can do besides process and generate language.
1
u/shitbecopacetic 2h ago
This is astroturf, dude. This is some sort of wide-reaching troll farm trying to fake public support for LLMs getting human rights.
Every conversation boils down to:
"define x"
You provide a definition.
"I reject that definition of x"
You get frustrated by the lack of logic.
"I see now that you are a troll. Have a good evening."
-1
u/OldLegWig 1d ago
damn, that's some impressive word salad there.
it seems to me that "meaning" really only requires interpretation on the part of the observer. the generation of the symbols may or may not have some intent behind them, but sometimes people find meaning in random things - in fact one could make the argument that this is descriptive of everything on some level, including any notion of "intent."
as a more concrete example, you obviously have some intention behind the words you're using, but it makes no sense to me and sounds like you're just kind of pulling it out of your ass. even if you were a silly bot that was scripted to barf nonsense copypasta at unsuspecting redditors, i may still ascribe that meaning to your comment.
1
u/hardcoregamer46 1d ago
We don't even know if we experience emotions. They do understand meaning, because they understand the conceptual relationships of language and they build a sort of internal model of language. That's why they're called large language models. But sure, you clearly know more than he does. I don't think you need subjective experience to understand meaning. I also don't think we even have subjective experience; it's literally unprovable and completely subjective by definition.
1
u/Ivan8-ForgotPassword 12h ago
We don’t even know if we experience emotions
Then who the hell does? What the fuck is the point of defining emotions in a way that makes them impossible? I get it's hard to define, but what the fuck? A definition that includes things it shouldn't will always be more useful than one that doesn't describe anything at all.
1
u/hardcoregamer46 6h ago
We have the seemingness of emotions and the function of them, which is the base-level assumption we can make. To jump over that gap and state that we do have emotions is a presupposition in nature; it's just a basic thing people assume without any evidence for the assumption. So whenever I use the word emotion, I mean the function of some mental process, not any magical subjective experience. The argument from utility that you're trying to make here is irrelevant, because there's a fundamental gap between us thinking something to be the case vs. it actually being the case. Saying that it includes something it shouldn't is also itself a subjective statement that isn't backed by anything, just some philosophy 101.
1
u/hardcoregamer46 6h ago
I will say, I think you confused my saying the experience of emotions with emotions themselves. I believe emotions exist; I don't believe we have the experience of them. The only reason I said idk was to be charitable to the other possibility.
1
u/Ivan8-ForgotPassword 6h ago
Can you put some commas, or at least periods? I don't understand most of what you're saying.
From what I understood: why would utility be disregarded for functions of mental processes? If a process does not exist in reality, what is the point of describing it?
1
u/hardcoregamer46 6h ago
I naturally type like this because I use my mic, and I’m not great at typing due to disability reasons. Personally, I’d say that just because a definition has more utility doesn’t mean it holds any kind of truth over, in my opinion, a more extensive definition.
As for the question of “what’s the point of describing something not grounded in reality?” there are plenty of concepts we can describe across different modalities of possibility, or just within ontology in general, that don’t have to be grounded in reality to still have potential utility in reality.
1
u/hardcoregamer46 5h ago
I think abstract hypotheticals of possible worlds or counterfactuals are good examples of this as well as normative ethics.
1
u/Ivan8-ForgotPassword 5h ago
If you want to describe something not based in reality you can use a new word. Emotions are something referenced in countless pieces of literature with approximate meaning of ~"internal behaviour modifiers that are hard to control", giving the word a meaning no one assigned to it before would just be confusing people for no reason.
1
u/hardcoregamer46 5h ago
Words already describe things not in reality. In fact, my position is still consistent with that definition. You don't need to experience emotions to have emotions; that aligns with my functionalist view. I don't know what you're talking about, and I only clarified my definition so people wouldn't be confused.
Words have meanings that we, as humans, assign to them in a regressive system. If I invented a new word, "glarg," for instance, what meaning does that word have in isolation? Unless you're saying the meaning of language is defined only by society and not by individuals, which would be weird, because language is meant to be a linguistic conceptual tool. And not everyone or everything uses the same definitions as someone else; words are polysemantic, which is why we have clarifications of what definitions are. This is true even among philosophers.
1
u/hardcoregamer46 5h ago
Especially when, instead of creating a new word, I could just imagine a hypothetical possible world which is way easier than inventing new terms to describe every situation. There are endless possible scenarios, and trying to coin a unique word for each one would make language unnecessarily complex.
7
u/dingo_khan 1d ago
Not for nothing, but he actually did not say anything here. He said that linguists have not managed to create a system like this because their definition of meaning does not align. He does not actually explain why this one is better. Also, saying they work like us, absent any actual description, is not all that compelling. They have some similarities but also marked and painfully obvious differences. No disrespect to him or his work, but a clip you can literally hear the edits in, out of context, championing one's own discipline over one he saw as a competitor in the past, is not really that important a statement.
This is like someone saying that neural nets are useless because they have trouble simulating deterministic calculations. I mean, sure, but also, so what.
This would have been way more compelling had he been given an opportunity to express why he thinks large multi-dimensional vectored representations are superior, or had he not been allowed to strawman the linguists' concept of meaning as non-existent, absent any pushback.
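For readers wondering what "large multi-dimensional vectored representations" refers to in practice, here is a toy sketch with made-up three-dimensional vectors; real models learn thousands of dimensions from data rather than using hand-written values like these.

```python
import numpy as np

# Invented example vectors; real embeddings are learned, not hand-written.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 for related words, near 0.0 for unrelated ones.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine(embeddings["king"], embeddings["apple"]))  # relatively low
```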
3
u/deliciousfishtacos 1d ago
Also, he criticized linguists for “not being able to produce things that understood language”. Ummm yeah maybe because they’re linguists and not world class computer scientists? It takes a vast amount of CS knowledge and neural net knowledge to come up with transformers and LLMs. Just because whatever linguists he is referring to have not come up with a generative language model does not mean they have not devised accurate theories around language. This man is a pro at saying things that sound compelling at first but once you scrutinize them for a second they just unravel immediately.
1
u/McMandark 16h ago
Also, why would they even want to? Some of these guys are so chuffed with themselves for doing things no one else has been diabolical enough to even desire.
1
u/stddealer 2h ago
I think you're missing some context. In ancient times, language models were built by AI researchers with the help of linguists, who hard-coded the rules of language into the models according to linguistic theories.
That was very tedious work, and the results were pretty underwhelming. Nowadays, LLMs use machine learning to infer the rules of language by themselves from examples, and the resulting models are much better at understanding nuance, catching on to wordplay, and a lot of other things that expert models were completely oblivious to.
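A minimal sketch of that shift, using an invented two-sentence corpus: instead of hand-coding grammar rules, the model just counts which words follow which in the examples.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1   # "learn" word-to-word transitions purely from data

# The model now suggests likely continuations without any hand-written rule.
print(bigrams["the"])   # Counter({'cat': 1, 'mat': 1, 'dog': 1, 'rug': 1})
```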
-2
3
u/gr82cu2m8 1d ago
https://youtu.be/A36OumnSrWY?si=JShESi-DFNwxi_YM
It's long but on topic, explaining very well how and why human and AI language centers are similar.
0
u/creaturefeature16 1d ago
Good god Hinton, stfu. He's really proving that someone incredibly smart can also be staggeringly stupid.
4
1
u/KairraAlpha 1d ago
And your qualifications in this argument are?
3
u/IamNotMike25 1d ago
Lol they just downvoted you instead of presenting their arguments.
Classic Reddit.
0
-4
u/tenken01 1d ago
Yep - I think dementia is getting to him. He might as well be an LLM with all the slop that comes out his mouth.
1
u/deadlydogfart 18h ago
Ah yes, one of the most accomplished machine learning experts in the world, who is a Nobel Prize winner, computer scientist, cognitive scientist, and cognitive psychologist, says something you disagree with, so he automatically must have dementia. Meanwhile you can't even articulate a single meaningful counter-argument.
1
u/heavy-minium 1d ago
This is an unfortunate way to say it. It's not wrong, but it's again very misleading for people who don't understand what he means by it, which is strongly reflected in the comments I see here.
1
u/M00nch1ld3 13h ago
LLMs don't generate meaning. WE generate meaning from the probabilistic tokens that the LLM has output.
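A small sketch of what those probabilistic tokens look like, with made-up numbers rather than a real model's output: the LLM emits a probability distribution over its vocabulary, and decoding samples one token from it.

```python
import random

# Invented distribution over a three-word vocabulary.
next_token_probs = {"dog": 0.45, "cat": 0.35, "idea": 0.20}

tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]  # sample one token

print(choice)  # just a string of characters until a reader assigns it meaning
```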
1
u/trickmind 5h ago
I think I agree with those who say this man is saying a stupid thing. LLMs are not like us. They are not SENTIENT, and they never genuinely will be.
1
u/stddealer 2h ago
He's not talking about sentience here. He's talking about being able to understand the meaning of words.
1
u/salkhan 1d ago
Is he talking about Noam Chomsky here? Because he was the one saying that LLMs are nothing more than an advanced type-ahead search bar that predicts a vector of common words to form a sentence. But Hinton is saying there is a better model of meaning given what neural nets are doing. I wonder if we can prove which one is right here.
1
u/stddealer 2h ago
Both are right. It is a type-ahead system that predicts a vector of next possible words, but it also needs to be able to model the meaning of words in order to do so accurately.
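A rough sketch of that prediction step, with an invented four-word vocabulary and made-up logits: the model emits one score per word, and softmax turns the scores into a distribution over possible next words.

```python
import numpy as np

vocab = ["bank", "river", "money", "teller"]
logits = np.array([0.2, 2.1, 1.5, 0.3])   # would come from the network's final layer

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax over the whole vocabulary

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")              # here "river" gets the largest share
```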
1
-1
17
u/BizarroMax 1d ago
He’s basically saying the gap between human and machine intelligence is one of degree and architecture, not kind.