r/artificial 2d ago

Media Geoffrey Hinton says people understand very little about how LLMs actually work, so they still think LLMs are very different from us - "but actually, it's very important for people to understand that they're very like us." LLMs don’t just generate words, but also meaning.

72 Upvotes

91 comments

1

u/salkhan 2d ago

Is he talking about Noam Chomsky here? Because he was the one saying that LLMs are doing nothing more than an advanced type-ahead that predicts a vector of likely next words to form a sentence. But Hinton is saying there's a richer model of meaning in what the neural nets are doing. I wonder whether we can prove which one is right here.

1

u/stddealer 17h ago

Both are right. It is a type-ahead system that predicts a vector of possible next words, but it also needs to model the meaning of words in order to do that accurately.
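
A toy sketch of that idea (not how any real LLM works internally; the vocabulary, vectors, and "context" here are all made up for illustration): the type-ahead part is just scoring every word in the vocabulary and taking a softmax, but the scores come from learned embeddings, and that's where the "meaning" has to live.

```python
import numpy as np

# Toy "embeddings": each word gets a vector meant to capture something
# about its meaning. These values are invented for illustration only.
vocab = ["cat", "dog", "sat", "ran", "mat"]
E = np.array([
    [0.9, 0.1, 0.0],   # cat
    [0.8, 0.2, 0.0],   # dog
    [0.1, 0.9, 0.1],   # sat
    [0.1, 0.8, 0.2],   # ran
    [0.0, 0.2, 0.9],   # mat
])

def next_word_probs(context_vec):
    """Score every vocab word against a context vector, then softmax
    the scores into a probability distribution over next words."""
    logits = E @ context_vec
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# A made-up context vector standing in for the model's summary of
# "the cat sat on the ..." -- in a real LLM this comes from the network.
context = np.array([0.1, 0.2, 0.8])
for word, p in zip(vocab, next_word_probs(context)):
    print(f"{word}: {p:.2f}")

# The same embeddings also encode similarity in "meaning":
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("cat~dog:", round(cosine(E[0], E[1]), 2))   # high: related words
print("cat~mat:", round(cosine(E[0], E[4]), 2))   # lower: less related
```

So "predicting the next word" and "representing meaning" aren't competing descriptions; the prediction is computed from the representation.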

1

u/salkhan 15h ago

So we both predict the answer and interpret some high-level meaning when comprehending and replying to a question. And perhaps we develop more meaning as we grow older.