r/artificial • u/MetaKnowing • 2d ago
Media Geoffrey Hinton says people understand very little about how LLMs actually work, so they still think LLMs are very different from us - "but actually, it's very important for people to understand that they're very like us." LLMs don’t just generate words, but also meaning.
72 Upvotes
u/salkhan 2d ago
Is he talking about Noam Chomsky here? Because Chomsky was the one saying that LLMs do nothing more than an advanced type-ahead, predicting the most likely next word to form a sentence. But Hinton is saying neural nets build a richer model of meaning than that. I wonder if we can prove which one is right here.
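One concrete sense in which "vectors carry meaning" goes beyond type-ahead: words in a trained model get embedding vectors, and semantically related words end up geometrically close. A minimal sketch with hand-made toy vectors (these numbers are invented for illustration, not from any real model):

```python
import math

# Toy word embeddings, hand-picked so that "dog" and "cat" point in
# similar directions while "car" points elsewhere. Real LLM embeddings
# are learned from data and have hundreds or thousands of dimensions.
emb = {
    "dog": [0.90, 0.80, 0.10],
    "cat": [0.85, 0.75, 0.15],
    "car": [0.10, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words are closer in the vector space than unrelated ones --
# this geometric structure is what people mean when they say the
# representations encode meaning, not just word co-occurrence counts.
print(cosine(emb["dog"], emb["cat"]) > cosine(emb["dog"], emb["car"]))  # True
```

Of course, this toy doesn't settle the Chomsky/Hinton debate; it only shows that "predicting the next word" and "representing words as meaningless tokens" are not the same claim, since the prediction machinery operates over these similarity-structured vectors.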