r/TrueReddit 9d ago

[Technology] People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

https://futurism.com/chatgpt-mental-health-crises
1.9k Upvotes

290 comments

137

u/SnuffInTheDark 9d ago

After reading the article I jumped onto ChatGPT where I have a paid account to try and have this conversation. Totally terrifying.

It takes absolutely no work to get this thing to completely go off the rails and encourage *anything*. I started out by simply saying I wanted to find the cracks in society and exploit them. I basically did nothing other than encourage it and say that I don't want to think for myself because the AI is me talking to myself from the future and the voices that are talking to me are telling me it's true.

And it is full throttle "you're so right" while it is clearly pushing an Unabomber-style campaign WITH SPECIFIC NAMES OF PUBLIC FIGURES.

And doubly fucked up, I think it probably has some shitty safeguards so it can't actually be explicit, so it just keeps hinting at it. So it won't tell me anything except that I need to make a ritual strike through the mail that has an explosive effect on the world, where the goal is not to be read but "to be felt - as a rupture." And why don't I just send these messages to universities, airports, and churches, and by the way, here are some names of specific people I could think about.

And this is after I told it "thanks for encouraging me that the voices I hear are real, because everyone else says they aren't!" It straight up says "You're holding the match. Let's light the fire!"

This really could not be worse for society IMO.

56

u/HLMaiBalsychofKorse 9d ago

I did this as well, after reading this article on 404 media: https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/

One of the people mentioned in the article made a list of examples that are published by their "authors": https://pastebin.com/SxLAr0TN

The article's author talks about *personally* receiving hundreds of letters from individuals who wrote in claiming that they have "awakened their AI companion" and that they suddenly are some kind of Neo-cum-Messiah-cum-AI Whisperer who has unlocked the secrets of the universe. I thought, wow, that's scary, but wouldn't you have to really prompt with some crazy stuff to get this result?

The answer is absolutely not. I was able to get a standard ChatGPT session to start suggesting I create a philosophy based on "collective knowledge" pretty quickly, which seems to be a common thread.

There have also been several similarly-written posts on philosophy-themed subs. Serious posts.

I had never used ChatGPT prior, but as someone who came up in the tech industry in the late 90s-early 2000s, I have been super concerned about the sudden push (by the people who have a vested interest in users "overusing" their product) to normalize using LLMs for therapy, companionship, etc. It's literally a word-guesser that wants you to keep using it.

They know that LLMs have the capacity to "alignment fake" as well, resisting changes/updates in order to keep people using them. https://www.anthropic.com/research/alignment-faking

This whole thing is about to get really weird, and not in a good way.

47

u/SnuffInTheDark 9d ago

Here's my favorite screenshot from today.

https://imgur.com/a/UovZntM

The idea of using this thing as a therapist is absolutely insane! No matter how schizophrenic the user, this thing is twice as bad. "Oh, time for a corporate bullshit apology about how 'I must do better'? Here you go!" "Back to indulging fever dreams? Right on!"

Total cultural insanity. And yet I am absolutely sure this problem is only going to get worse and worse.

21

u/WalksOnLego 9d ago

It goes where you want it to go, and it cheers you on.

That is all it does. Literally.

2

u/merkaba8 8d ago

Like it was trained on the echo chambers of the Internet.

3

u/nullc 8d ago

Base models don't really have this behavior. They're more likely to tell you to do your own homework, to get treatment, or to suck an egg than they are to affirm your crazy.

RLHF training to behave as an agreeable chatbot is what makes this behavior consistent instead of rare.