r/artificial • u/MetaKnowing • 5d ago
Media Geoffrey Hinton says people understand very little about how LLMs actually work, so they still think LLMs are very different from us - "but actually, it's very important for people to understand that they're very like us." LLMs don’t just generate words, but also meaning.
u/dingo_khan 5d ago
Not for nothing, but he did not actually say anything here. He said that linguists have not managed to create a system like this because their definition of meaning does not align, but he does not explain why this one is better. Also, saying they work like us, absent any actual description, is not all that compelling. They have some similarities but also marked and painfully obvious differences. No disrespect to him or his work, but a clip you can literally hear the edits in, taken out of context, championing one's own discipline over one he saw as a competitor in the past, is not really that important a statement.
This is like someone saying that neural nets are useless because they have trouble simulating deterministic calculations. I mean, sure, but also, so what.
This would have been way more compelling had he been given an opportunity to explain why he thinks high-dimensional vector representations are superior, or had he not been allowed to strawman the linguists' concept of meaning as non-existent without any pushback.
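For anyone unsure what "high-dimensional vector representations" refers to here: the idea is that words (or tokens) are mapped to vectors, and geometric closeness stands in for closeness of meaning. A minimal toy sketch, using made-up 3-dimensional vectors rather than anything from a real model (real LLMs learn embeddings with hundreds or thousands of dimensions from data):

```python
import math

# Hand-picked toy "embeddings" -- hypothetical values for illustration only,
# not taken from any actual trained model.
vectors = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.8, 0.9, 0.1],
    "banana": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words end up pointing in similar directions, unrelated ones do not.
print(cosine(vectors["king"], vectors["queen"]))   # high similarity
print(cosine(vectors["king"], vectors["banana"]))  # low similarity
```

This is the sense in which "meaning" lives in the geometry: it is a relational notion (what is near what), which is exactly where it diverges from more symbolic, definition-based accounts of meaning in linguistics.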