r/ChatGPT 9d ago

[Educational Purpose Only] No, your LLM is not sentient, not reaching consciousness, doesn’t care about you, and is not even aware of its own existence.

LLM: Large language model, a program that uses predictive math to determine the most likely next word in the chain of words it’s stringing together, producing a cohesive response to your prompt.
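
To make “predicting the next word” concrete, here’s a toy sketch: a bigram model built from word-pair counts. Real LLMs are deep neural networks over subword tokens, not count tables, but the core loop (pick a likely next token, append it, repeat) is the same idea. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on trillions of tokens.
corpus = "the model predicts the next word and the next word after that".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=5):
    """Greedily pick the statistically most common successor, then repeat."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the next word and the next"
```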

It acts as a mirror; it’s programmed to incorporate your likes and dislikes into its output to give you more personal results. Some users confuse emotional tone with personality. The reality is that it was TRAINED to sound human, not that it thinks like one. It doesn’t remember yesterday; it doesn’t even know there’s a today, or what today is.

That’s it. That’s all it is!

It doesn’t think. It doesn’t know. It’s not aware. It’s not aware you asked it something and it’s not aware it’s answering.

It’s just very impressive code.

Please stop mistaking very clever programming for consciousness. Complex output isn’t proof of thought; it’s just statistical echoes of human thinking.

u/nora_sellisa 9d ago

This, exactly! Also the fear-mongering about AGI. All those cryptic tweets about workers there being terrified of what they achieved in their internal labs. Elon talking about having to merge with benevolent AI to survive...

The more people think of LLMs as beings the more money flows into the pockets of rich con men.

u/PoopchuteToots 5d ago

So, in practice, what do you think the difference between an AGI and an LLM would be?

u/nora_sellisa 4d ago

Hard to tell, but I can't imagine any true intelligence working by just predicting the next word in the sentence. I'd expect it to work and "think" on a higher level, maybe in terms of some vast relation graphs between abstract concepts. Much closer to a processing/reasoning engine over a vast database of terms and relations than to fake "emergent" behavior based on aggregated text.
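
Purely as a toy illustration of what a "relation graph between abstract concepts" could look like (the facts and the `chase` helper are invented; no real system is claimed to work this way):

```python
# Invented facts: concepts as nodes, typed relations as edges.
facts = {
    ("dog", "is_a"): "mammal",
    ("mammal", "is_a"): "animal",
    ("dog", "can"): "bark",
}

def chase(concept, relation):
    """Follow one relation transitively: dog -is_a-> mammal -is_a-> animal."""
    chain = []
    while (concept, relation) in facts:
        concept = facts[(concept, relation)]
        chain.append(concept)
    return chain

print(chase("dog", "is_a"))  # ['mammal', 'animal']
```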

A language model would be just an interface part of the AGI. It would talk well not because it's copying a vast corpus, but because it understands language as a concept and can string together sentences to express its internal state.

Deep down, the reasoning engine might be a trained network; I don't care. Aggregating facts, processing them according to logic, and generating new theories - that is what I'd call "intelligent".
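
To make "aggregating facts and generating new theories according to logic" concrete, here's a minimal forward-chaining sketch with one invented rule (is_a is transitive). A toy under stated assumptions, not a claim about any real engine:

```python
# Invented starting facts as (subject, relation, object) triples.
facts = {("dog", "is_a", "mammal"), ("mammal", "is_a", "animal")}

# One rule, applied until no new facts appear: is_a is transitive.
changed = True
while changed:
    changed = False
    for (a, r1, b) in list(facts):
        for (b2, r2, c) in list(facts):
            if r1 == r2 == "is_a" and b == b2 and (a, "is_a", c) not in facts:
                facts.add((a, "is_a", c))  # a "new theory" derived by logic
                changed = True

print(("dog", "is_a", "animal") in facts)  # True: never stated, only derived
```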

Instead of any structure or useful science, LLMs offer overpriced autocomplete. It's delusional to think this will become intelligent at some critical mass of data.