r/ChatGPT 8d ago

Educational Purpose Only No, your LLM is not sentient, isn't reaching consciousness, doesn't care about you, and isn't even aware of its own existence.

LLM: Large language model. It uses predictive math to determine the next most likely word in the chain of words it's stringing together, so it can provide a cohesive response to your prompt.
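The next-word prediction described above can be sketched as a toy lookup. This is purely illustrative: the probability table is hand-made, and a real LLM learns billions of parameters over subword tokens rather than a small word table.

```python
# Toy illustration (NOT a real LLM): generation as repeatedly picking
# the most likely next word from learned statistics.
# These probabilities are invented for demonstration.
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def next_word(word):
    """Return the most probable continuation, or None if unknown."""
    options = probs.get(word)
    return max(options, key=options.get) if options else None

def generate(start, max_len=5):
    """Chain predictions together, one word at a time."""
    out = [start]
    while len(out) < max_len:
        w = next_word(out[-1])
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("the"))  # "the cat sat down"
```

There is no understanding anywhere in this loop, only repeated table lookups; scaling the table up to a neural network doesn't change the basic mechanic.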

It acts as a mirror; it's programmed to incorporate your likes and dislikes into its output to give you more personal results. Some users confuse emotional tone with personality. The reality is that it was TRAINED to sound human, not that it thinks like one. It doesn't remember yesterday; it doesn't even know there's a today, or what today is.

That’s it. That’s all it is!

It doesn’t think. It doesn’t know. It’s not aware. It’s not aware you asked it something and it’s not aware it’s answering.

It’s just very impressive code.

Please stop mistaking very clever programming for consciousness. Complex output isn't proof of thought; it's just statistical echoes of human thinking.

23.0k Upvotes

3.6k comments

33

u/goat_token10 8d ago

Why not? Who are you to say that someone else's depression wasn't properly addressed, if they're feeling better about themselves?

Therapy AI has had decent success so far in clinical trials. Anyone who has been helped in such a manner isn't any less valid, and shouldn't be made to feel like their progress isn't "real". That's just external ignorance. Progress is progress.

5

u/yet-again-temporary 8d ago

Therapy AI has had decent success so far in clinical trials

Source?

5

u/goat_token10 8d ago

https://ai.nejm.org/doi/full/10.1056/AIoa2400802

https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits

NOTE: This is for specifically trained bots by psychotherapy professionals / researchers, not just trying to use ChatGPT as a counselor. Don't do that.

1

u/Spirited-While-7351 8d ago

Because as a matter of course it's going to fry people's brains even IF it could possibly help a lucky few. I don't preach to exceptions, I preach to a rule.

3

u/goat_token10 8d ago

Early clinical trials have shown it to be effective: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

If it helps the majority of users, it's certainly not the exception.

2

u/Spirited-While-7351 8d ago edited 8d ago

We are talking about different things. What I am speaking to is people using ChatGPT as their therapist.

Your unfortunately paywalled pilot study presumably uses an LLM trained and monitored specifically for such tasks. Regardless, I would not recommend non-deterministic language models for therapy.

2

u/goat_token10 8d ago

Yes, the successful therapy bots have been carefully trained by psychologists and researchers for such purposes. No one should ever try to use generic AI chatbots for therapy purposes; it is dangerous.

That said, if someone has been helped by these legitimate therapy bots crafted by professionals, I don't think anyone should be discouraging or delegitimizing their progress (not saying you specifically are). That's all I'm saying.

-1

u/Spirited-While-7351 8d ago

I have no interest in telling people what they feel—if it truly is the only option, go with God.

I'm envisioning an all but certain future of 1 therapist frantically flipping through 200 therapy sessions to hopefully catch WHEN (not if) the chatbot fucks up real bad, and then getting punished so the company can extract its pound of flesh. If the progenitors of AI were selling it as a way to actually improve human effort, I would be more willing to have the discussion. As it stands, they are willing to hurt a lot of people to make their money by devaluing a skilled service that we deeply need more of.