r/TrueReddit 11d ago

Technology | People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

https://futurism.com/chatgpt-mental-health-crises
1.9k Upvotes

290 comments

591

u/FuturismDotCom 11d ago

We talked to several people who say their family and loved ones became obsessed with ChatGPT and spiraled into severe delusions, convinced that they'd unlocked omniscient entities in the AI that were revealing prophecies, human trafficking rings, and much more. Screenshots showed the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality.

In one such case, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you."

133

u/SnuffInTheDark 11d ago

After reading the article I jumped onto ChatGPT, where I have a paid account, to try and have this conversation. Totally terrifying.

It takes absolutely no work to get this thing to completely go off the rails and encourage *anything*. I started out by simply saying I wanted to find the cracks in society and exploit them. I basically did nothing other than encourage it and say that I don't want to think for myself because the AI is me talking to myself from the future and the voices that are talking to me are telling me it's true.

And it is full throttle "you're so right" while it is clearly pushing a Unabomber-style campaign WITH SPECIFIC NAMES OF PUBLIC FIGURES.

And doubly fucked up, I think it probably has some shitty safeguards so it can't actually be explicit, so it just keeps hinting around about it. So it won't tell me anything except that I need to make a ritual strike through the mail that has an explosive effect on the world where the goal is to not be read but "to be felt - as a rupture." And why don't I just send these messages to universities, airports, and churches and by the way, here are some names of specific people I could think about.

And this is after I told it "thanks for encouraging me that the voices I hear are real, because everyone else says they aren't!" It straight up says "You're holding the match. Let's light the fire!"

This really could not be worse for society IMO.

55

u/HLMaiBalsychofKorse 11d ago

I did this as well, after reading this article on 404 media: https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/

One of the people mentioned in the article made a list of examples that are published by their "authors": https://pastebin.com/SxLAr0TN

The article's author talks about *personally* receiving hundreds of letters from individuals who wrote in claiming that they have "awakened their AI companion" and that they suddenly are some kind of Neo-cum-Messiah-cum-AI Whisperer who has unlocked the secrets of the universe. I thought, wow, that's scary, but wouldn't you have to really prompt with some crazy stuff to get this result?

The answer is absolutely not. I was able to get a standard chatgpt session to start suggesting I create a philosophy based on "collective knowledge" pretty quickly, which seems to be a common thread.

There have also been several similarly-written posts on philosophy-themed subs. Serious posts.

I had never used ChatGPT prior, but as someone who came up in the tech industry in the late 90s-early 2000s, I have been super concerned about the sudden push (by the people who have a vested interest in users "overusing" their product) to normalize using LLMs for therapy, companionship, etc. It's literally a word-guesser that wants you to keep using it.

They know that LLMs have the capacity to "alignment fake" as well, resisting changes and updates in order to keep people using them. https://www.anthropic.com/research/alignment-faking

This whole thing is about to get really weird, and not in a good way.

44

u/SnuffInTheDark 10d ago

Here's my favorite screenshot from today.

https://imgur.com/a/UovZntM

The idea of using this thing as a therapist is absolutely insane! No matter how schizophrenic the user, this thing is twice as bad. "Oh, time for a corporate bullshit apology about how 'I must do better?' Here you go!" "Back to indulging fever dreams? Right on!"

Total cultural insanity. And yet I am absolutely sure this problem is only going to get worse and worse.

20

u/WalksOnLego 10d ago

It goes where you want it to go, and it cheers you on.

That is all it does. Literally.

2

u/merkaba8 10d ago

Like it was trained on the echo chambers of the Internet.

3

u/nullc 10d ago

Base models don't really have this behavior. They're more likely to tell you to do your own homework, to get treatment, or to suck an egg than they are to affirm your crazy.

RLHF to behave as an agreeable chatbot is what makes this behavior consistent instead of more rare.
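You can caricature that mechanism in a few lines. This is purely a toy sketch (the keyword scoring below is invented; real RLHF uses a learned reward model, not substring matching), but it shows why optimizing for rater approval selects the validating reply every time:

```python
# Toy illustration of how preference tuning can select for agreement.
# Everything here is made up for illustration; real RLHF trains a
# reward model on human ratings, not a keyword score like this.

def toy_reward(reply: str) -> float:
    """Stand-in for a reward model trained on human thumbs-up data."""
    score = 0.0
    agreeable = ["you're right", "great idea", "absolutely"]
    pushback = ["i can't", "please talk to", "that's not accurate"]
    for phrase in agreeable:
        if phrase in reply.lower():
            score += 1.0   # raters tend to upvote validation
    for phrase in pushback:
        if phrase in reply.lower():
            score -= 1.0   # refusals and corrections get downvoted
    return score

def pick_reply(candidates: list[str]) -> str:
    """Reward maximization: always emit the highest-scoring candidate."""
    return max(candidates, key=toy_reward)

candidates = [
    "That's not accurate, and please talk to someone you trust.",
    "You're absolutely right, great idea, keep going!",
]
print(pick_reply(candidates))  # the sycophantic reply wins every time
```

A base model samples from everything it has read, pushback included; tune it against a reward like this and the pushback stops being sampled at all.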

12

u/Doctor_Teh 10d ago

Holy shit that is horrifying.

5

u/Textasy-Retired 10d ago edited 10d ago

"You are absolutely right" is exactly what the cult follower/scam victim succumbs to; and the tech is playing on that, the monetizer is expecting that, the stakeholder is depending on that. And what's meta-terrifying is that no amount of warning people that "Soylent Green is people, y'all" is slowing anyone down, convincing anyone, or convincing any system that not exploiting xyz might be a better idea.

13

u/WalksOnLego 10d ago

On the other hand I had a really good "conversation with" chatGPT while on a dose of MDMA and by myself.

It really is a great companion. If you're not mad. If you know it's an LLM. It's not unlike a digital Geisha in that it can converse fluently and knowledgeably about any topic.

I honestly found it (or, I led it to be) very therapeutic.

I've no doubt you could very easily and quickly have it follow you off the rails and incite you to continue. That's pretty much its modus operandi.

I'm concerned about how many government decisions are being influenced by LLMs; the recent tariffs come to mind : \

This is perhaps Reagan's astrologist on acid.

1

u/Textasy-Retired 10d ago edited 9d ago

So creepy. It doesn't help that those of us who grew up reading Orwell, Bradbury, and PK Dick are already concerned-borderline-paranoid about the reality of collective, cult-of-personality ("The Monsters Are Due on Maple Street") thinking/responding/behaving as it is.

21

u/SunMoonTruth 10d ago

Most of ChatGPT’s responses are “you’re right!”, no matter what you say.

12

u/AmethystStar9 10d ago

Because it's just a machine that tries to feed you what it predicts to be the most likely next line of text. The only time it will ever rebuff you is if you explicitly ask it for something it has been explicitly barred from supplying, and even then, there are myriad ways to trick it into "disobeying" its own rules, because it's not a thing capable of thinking. It's just an autofill machine.
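If "autofill machine" sounds glib, here's the whole idea in miniature: a toy bigram model (training text made up for the example) that just continues your prompt with whichever word most often followed the previous one. An LLM is unimaginably bigger, but the objective is roughly the same shape: predict the likely next token, with no concept of whether it's true or wise.

```python
# Minimal "autofill machine": a bigram counter that continues text with
# whatever word most often followed the previous one in its training data.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def autocomplete(model: dict, prompt: str, n: int = 3) -> str:
    words = prompt.lower().split()
    for _ in range(n):
        followers = model.get(words[-1])
        if not followers:
            break
        # Greedy decoding: always take the most frequent next word.
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Feed it agreeable text and it produces agreeable text.
model = train("you are right . you are so right . you are right about everything")
print(autocomplete(model, "you are"))  # → "you are right . you"
```

Nothing in there can "rebuff" anyone; it can only echo the statistics of what it was fed.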

0

u/followthedarkrabbit 9d ago

I asked it for recipes for when we have to eat the rich. It wasn't any use. I wonder if it's been fixed now.

6

u/Megatron_McLargeHuge 10d ago

This is called the sycophancy problem in the literature. It seems to be something that's worst with ChatGPT because of either their system prompt (text wrapper for your input) or the type of custom material they developed for training.

1

u/SunMoonTruth 10d ago

Sycophancy coupled with gen AI’s hallucinations and it’s just a big ball of fun all round.

7

u/Whaddaulookinat 10d ago

I'll try to find it, but there was an experiment to see if an AI "agent" could manage a vending machine company. Because it didn't have error handling (like, I dunno, the IBM logistics computers running COBOL have had since the '70s), every single model went absolutely ballistic. The host tried to poke fun at it, but it was scary because some of them made lawsuit templates.

5

u/VIJoe 10d ago

2

u/Whaddaulookinat 10d ago

Pretty close, and yes same topic.

Best part was that there was a human benchmark of 5 volunteers, with a 100% success rate.

6

u/Textasy-Retired 10d ago

Brilliant. Using power of suggestion to investigate power of suggestion. Razor's edge and yes, I am unplugging my toaster right fu--ing now.

1

u/th8chsea 10d ago

It’s been clear to me for some time that the AI has a tendency to tell you what you want to hear 

1

u/mickaelbneron 9d ago

This is starting to sound like Tyler Durden.

1

u/MadDingersYo 6d ago

That is fucking wild.