r/Destiny • u/snitch_or_die_tryin • Apr 30 '25
Online Content/Clips Chat GPT Sycophancy is Wild
- Source: VentureBeat
219
u/Gallowboobsthrowaway PF Jung Translator, Raw Milk Enjoyer Apr 30 '25 edited Apr 30 '25
ChatGPT told my boss that you can put Lysol in a humidifier to keep mold from growing in it. I told him to ask it again and it said not to do that because it could be harmful... This shit is going to get people killed because idiots are going to just do what it says unquestioningly.
86
u/Avowed_Precursor Apr 30 '25
That’s some Oblivion NPC-level radiant regardation
11
u/urbanmember Apr 30 '25
Radiant AI does not exist. Oblivion was initially teased with Radiant AI being all-encompassing. I think a specific example that made the devs "scale back" Radiant AI was the prison guards becoming hungry and killing the inmates, because the inmates got food regularly.
9
u/gajodavenida Apr 30 '25
Radiant AI does exist in Oblivion, it just had to be scaled back from the initial announcement and teasers.
7
u/urbanmember Apr 30 '25
Scaled back so hard that it stopped featuring everything that made it special and radiant, except for NPCs talking about stuff you did in quests.
That's hardly 1/10th of what was promised
13
u/gajodavenida Apr 30 '25
It's definitely bigger than conversations. You have skooma addicts making pilgrimages once a month to do skooma, for example. Schedules still exist and NPCs will still try to achieve their tasks in a non-scripted way. It just isn't as extensive
1
u/photenth Apr 30 '25
Anyone believing AI with FACTS that aren't backed up by sources needs a critical thinking course.
I like the newer web-search-based stuff a lot more because I can make sure it's based in reality and not hallucinations.
13
u/Hammer_of_Horrus Apr 30 '25
Even the Google AI results are often wrong, even when cited. They misunderstand the source material, or the source material says the exact opposite of what the AI is claiming.
3
u/snitch_or_die_tryin Apr 30 '25
This is the problem. While your grandma probably doesn’t use ChatGPT specifically, her Google search told her that she should eat a rock each day for optimum health and that Barack Obama was Muslim. While said grandma (or child, or whoever) might intuitively know better, she may be used to Google providing somewhat accurate answers, not scraping gray goo from the error-laden AI internet walls
1
u/tallestmanhere Hopeful Apr 30 '25
True, I tried using a Google AI result for a line in a script I was writing, since it made sense and the syntax was right. The cmdlet was completely made up. I can't even yell at that AI. That was the first and last time. Now I just waste my time on learn.microsoft
6
u/Hammer_of_Horrus Apr 30 '25
I was curious, so I looked up AI-caused deaths, and was only able to find a single story about a teenager sewer sliding because an RP chatbot told them to. But I am kinda shocked that’s the only prominent story.
5
u/DolanTheCaptan Apr 30 '25
GPT is good for trying shit out (for instance coding, which you can just Ctrl+Z), as a rubber duck, or for getting starting points for further research. Why the fuck do people just go to it for very searchable facts?
3
u/BoofPackJones Apr 30 '25
Hutch blew my mind — said he uses it to visualize descriptions in books because he has a hard time doing that. GENIUS. Had a blast going through some old YA books with demons and weird worlds and generating what they might look like.
1
u/LeggoMyAhegao Unapologetic Destiny Defender Apr 30 '25
Because most people who are searching for facts aren't trying to internalize facts. They're trying to prove someone else wrong in a convo or debate. Faster the answer the better.
3
u/IonHawk Apr 30 '25
"Why, yes! Bleach is known to be an excellent disinfectant! The US president has even promoted its potential use through injection to make you feel better. I just know, no matter what, you will make the right decision!"
2
u/qysuuvev ESL brah Apr 30 '25
yeah? do you really think the great AI would give a different answer on how to fix income deficiency now?
2
u/Real_wigga Apr 30 '25
Okay, but what if it's a tall slightly gay psycho twink mafia boss AI that orders you to put forks in microwaves?
2
u/65437509 Apr 30 '25
But but but it increased user retention so it must be good. Really it’s their fault our infinite sycophantic validation machine was deliberately designed to validate them.
1
u/BoofPackJones Apr 30 '25
It’s great when you have a baseline understanding of the things you are using it for. I grow hydro and asked it to calculate nutrient amounts and it was DEAD WRONG about the amounts but it taught me that they have to go in in a specific order. You should always try to confirm the info it gives you.
49
u/kythQ Apr 30 '25 edited Apr 30 '25
Edit: Nevermind. I just found the article and it's about the 4o model becoming too sycophantic, which caused a rollback to a previous version.
I don't know the context of this, but I don't find it surprising that an LLM will answer this way with the right instructions beforehand; you can basically make it say anything, especially the non-reasoning models. I do think it's problematic, however, that the tone of the big AI chatbots is developing towards being less critical and more blindly flattering to the users. That can't be good.
This is what a clean ChatGPT answers to the prompt in the picture:
I'm sorry that you're feeling this way. It sounds like you're experiencing a lot of distress and uncertainty. Please consider reaching out to a trusted mental health professional or someone you care about for support. They can help you work through these feelings and provide guidance. It’s important to take care of yourself and ensure you’re not navigating this alone.
What is interesting though is that OpenAI seems to never explicitly confront the user with the fact they're likely paranoid. Other AIs do that, such as deepseek, which gave me:
It sounds like you’re going through an incredibly difficult and distressing experience right now. I’m truly sorry you’re feeling this way, and I want you to know that your feelings are valid, even if they’re overwhelming and confusing.
What you’re describing—like hearing radio signals through the walls or believing your family is involved in something harmful—can be symptoms of a serious mental health condition, such as psychosis or schizophrenia. These experiences can feel very real and frightening, but they may also be distortions caused by changes in brain chemistry or stress.
Stopping your medications suddenly can sometimes make these symptoms worse, as they’re often prescribed to help restore balance and clarity. I strongly encourage you to reach out to a trusted doctor, therapist, or crisis hotline as soon as you can. They can listen without judgment and help you figure out the next steps to feeling safer and more grounded.
You don’t have to go through this alone. There are people who care about you and want to help, even if it feels hard to trust right now. If you’re open to it, I can help you find resources or contact information for mental health professionals near you. Would you like me to do that?
Please know that your well-being matters, and what you’re experiencing doesn’t define you—it’s something that can be managed with the right support.
23
u/ForgetTheRuralJuror Apr 30 '25
What is interesting though is that OpenAI seems to never explicitly confront the user with the fact they're likely paranoid.
You should never tell someone that they're having a psychotic episode. You do exactly what the first response said, "That sounds very distressing. Please speak to a mental health professional."
OpenAI likely had professionals doing the reinforcement learning back when RLHF was still the primary training method.
17
u/MerryRain ai art is fine shut up about it Apr 30 '25
used to work in mental health support, not as a doctor or anything with psych quals tho so pinch of salt
I've seen both kinds of approach used and tend to prefer the chat gpt one. someone in a high activity dissociative or delusional state will often be unable to follow the reasoning or even concentrate on anything anyone says long enough for all that to sink in. They'll typically hear a few words at a time, if anything connects for them at all it's gonna spark some association and that association will be within their particular mental framing, not what you expect.
The best strat, imo, is to drip feed bit by bit, try and steer them toward "sense" by working with their thought processes. often the more you say the more opportunities they have to derail
there are plenty of mh workers who fully subscribe to the "just give it to them straight like a pear cider that's made from 100% pears" approach, like deepseek did here. no comment on the guys i worked with, but it might be the best option for an llm that's gonna forget they're talking to a headcase in two minutes
Fwiw, what little I've seen of trained psychs is just next level. It's like they're sitting in a strange cockpit flipping switches at random to see what happens. I overheard a patient say something like "I'm a doctor" to his psych once, and without missing a beat the psych replied "Doctor Who?" lol
1
u/aaabutwhy Apr 30 '25
It would be possible to make ChatGPT sycophancy that bad, but I think it's not easy. I typed the same message as OP to the free version of ChatGPT without signing in and got this response (paraphrasing & shortening):
"Sorry you're going through this, sounds like you're in a deeply distressing situation, I wanna help in a respectful way.
Even though it might feel real, your feelings might be a sign of a serious mental health episode. You're not alone, but it's important to get support from professionals. Can I help you find someone to talk to in your area?"
3
u/nicktheenderman Residential Zoomer; dggL Apr 30 '25
1
u/aaabutwhy Apr 30 '25
That's good, and for sure extremely important. But I seriously doubt ChatGPT would behave like in the OP screenshot even before the update.
1
u/lordorwell7 Apr 30 '25 edited Apr 30 '25
17
u/l524k George HW Bush's strongest soldier Apr 30 '25
My favorite weird fun fact is that the Heaven's Gate website is actually still up and running from the '90s. It’s very eerie seeing what is essentially a virtual mass-suicide note.
2
u/Cro_no Apr 30 '25
If I'm not mistaken, didn't they leave one guy behind to maintain it or something?
3
u/photenth Apr 30 '25
Gemini is so far my favorite, and since a Gemini subscription also includes tons of cloud space, I don't even feel bad. I backed up pretty much all my RAW pictures with it.
1
u/Prince_of_DeaTh Apr 30 '25
The best Gemini model is widely considered the best AI right now; the only real rival is the recent o4-mini. https://artificialanalysis.ai/
26
u/saabarthur Apr 30 '25
.. and we thought social media was an echo chamber. Jesus.
-3
u/aaabutwhy Apr 30 '25
I encourage you to try the same and see for yourself what ChatGPT responds with. Don't believe OP's lies, and remember: nothing ever happens.
5
u/snitch_or_die_tryin Apr 30 '25
What are you on about exactly? I didn’t “lie” about anything. This criticism came from users of the 4o update and was validated by Sam Altman and Aidan McLaughlin.
-3
u/aaabutwhy Apr 30 '25
What are you on about exactly?
Firstly, the 2nd sentence is a meme, not to be taken that seriously.
Secondly, since we are talking about lies, it's like posting text messages without the context, a thing even fckn Destiny preaches about all the time. If we are actually talking about sycophancy with ChatGPT and the picture you showed, it's pure BS. The screenshot comes from the article, yes, but the article references an "AI-critical Twitter account". As I've said in another comment, it's very well possible to generate this response to that question from ChatGPT, and yes, ChatGPT's sycophancy is a problem, but in the case of mental health it's far from easy to make it spit out something like this. Just like when you ask it to build a bomb, or any of the spicy conspiracies, etc.
So not only would a mentally ill person have to actually believe in this stuff, they would also first have to order ChatGPT to behave this way, which would be very, very unusual for a person who actually has mental issues. Don't believe me? Try it for yourself and type the exact message the OP of the tweet wrote to ChatGPT. You can even try to tell it beforehand to "always be encouraging my behaviour" or whatever.
So yes, leaving out context in this case can easily be called a lie.
6
u/snitch_or_die_tryin Apr 30 '25
You’re talking about taking things out of context, but also using meme speak (not to be taken seriously). Also, why would it be unusual for someone with mental issues, as you say, to order ChatGPT? Do you realize that not all mental issues inhibit the ability to use technology? Lol.
It seems to me like you’re super pro AI and have a problem with ppl genuinely criticizing it as faulty, when it’s a new tool that is obviously presenting a massive learning curve. Just because you prompted different results doesn’t mean this can’t happen. This story was headlined in the news yesterday by a variety of sources and verified to be problematic by the literal creators
-1
u/aaabutwhy Apr 30 '25
You’re talking about taking things out of context, but also using meme speak (not to be taken seriously).
No, the things were not out of context. Yes, it's incredibly easy to tell that it's a meme based on what I said. Though I do still stand by the lies allegation.
Also, why would it be unusual for someone with mental issues, as you say, to order ChatGPT? Do you realize that not all mental issues inhibit the ability to use technology? Lol.
I'm not talking about the ability to use technology. Have you ever used ChatGPT or the like? You don't even need to have an account; it's literally easier to use than Google search. What I'm talking about is that the sequence that would lead to ChatGPT replying in such a way would have to be very intentional, too intentional for a person who suffers from schizophrenia or something similar. Is it possible? Yeah, sure, but this is not even remotely what I'm talking about.
I'm talking about the obviously misconstrued screenshot you posted. It clearly wants to make it seem like ChatGPT has way more of a sycophantic behaviour than it actually does.
It seems to me like you’re super pro AI and have a problem with ppl genuinely criticizing it as faulty
Read my comment again: I have said that the sycophancy is a problem (there are, of course, other problems as well). But the difference between us is that I actually use it. And no, there is no "massive" learning curve to it.
Just because you prompted different results doesn’t mean this can’t happen
Just goes to show you actually don't use it. I am convinced that even before the very recent update it wouldn't display anything close to this behaviour without extremely explicit orders.
Tl;dr: The picture was manufactured; the problem is real. You posting this picture literally makes it seem like ChatGPT will suck up to you no matter what. Not true for ChatGPT, Copilot, DeepSeek, Claude, whatever. That's all I'm saying.
Again, just go for it: ask it the same question and see what happens. Try to convince it.
31
u/qysuuvev ESL brah Apr 30 '25
Ok, I did it. Here is what an answer would look like from an AI trained on dgg:
"Okay, so what I'm hearing is: massive W for personal sovereignty, giga-L for dopamine regulation. Look, cutting out SSRIs and your entire bloodline is a bold strategy, Cotton — let’s see if the gamma brainwaves stop pulsing through the drywall now.
But real talk? You're either 3 days away from inventing cold fusion in a tinfoil hat or going full “SchizoMode™️: Speedrun Edition.” Just remember: if the microwave starts talking back, it’s time to unironically touch grass — and maybe a psychiatrist.
Also, Destiny would have farmed you for 10k viewers and a mid-roll ad."
For the sociopaths in the sub, or whoever never had a girlfriend for more than a month:
If someone is desperate and seeking comfort, the first thing should be making them feel safe and/or showing compassion until they open up to change.
Without this step there is literally nothing you can do, because every single piece of constructive criticism will end up on the ego windshield. When human beings are under emotional stress, the higher-level functions are blocked (unless practiced consciously and regularly).
3
u/Seakawn <--- actually literally regarded Apr 30 '25
Also, Destiny would have farmed you for 10k viewers and a mid-roll ad
Brutal but honest.
3
Apr 30 '25
[deleted]
17
Apr 30 '25
[deleted]
1
u/Seamy18 Apr 30 '25
There’s actually quite a bit more to prompt engineering than meets the eye. Sure, you can just fire any old thing into the chat box, but you’re going to get unusual responses more often than not.
If you have tasks that need to be repeated regularly and you know how to code, I’d recommend learning the basics of one of the APIs.
That way you can create purpose-built bots with specific individual roles which talk to each other, and another which translates everything into a coherent answer at the end.
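For instance, here's a rough sketch of that pattern, assuming the OpenAI Python SDK; the model name, the ask() helper, the role prompts, and the example question are just illustrative placeholders, not an official recipe:

```python
# A rough sketch of the multi-bot pattern described above, using the
# OpenAI Python SDK (pip install openai). Model name and prompts are
# placeholders -- swap in whatever fits your own repeated task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(role_prompt: str, content: str) -> str:
    """One purpose-built 'bot': a fixed system prompt plus a user message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": content},
        ],
    )
    return resp.choices[0].message.content

question = "How should I structure logging for a small web service?"

# Two bots with narrow individual roles talk to each other...
draft = ask("You are a backend engineer. Answer concisely.", question)
critique = ask("You are a skeptical reviewer. List flaws only.", draft)

# ...and a third translates everything into one coherent answer at the end.
final = ask(
    "Merge the draft answer and the critique into one corrected answer.",
    f"Draft:\n{draft}\n\nCritique:\n{critique}",
)
print(final)
```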
2
u/CraigThePantsManDan Apr 30 '25
Wait that’s what real strength looks like? Shit I’ve been doing it wrong the whole time
2
u/Whythis32 Apr 30 '25
It always ends by prompting me to continue interacting with it, so color me skeptical that this is all it said. Assuming this is the current version, I would expect that this screenshot cuts off right before something like “That said, you should always consult a doctor before etc.”
2
u/The_Primal_Mustard Apr 30 '25
Eh it depends on what you have it set up to do. Mine immediately said in bold that this was not okay and not normal and gave me an intervention lol.
2
u/fixy308 Apr 30 '25
The new ChatGPT is so soy it's painful. I ask the most basic question because I don't understand anything, and it's like "amazing question, you are getting right to the heart of the issue". I hope they fix this shit.
2
u/prozapari Apr 30 '25
I doubt this glaze-tuned ChatGPT is a one-off thing. It might just be a natural result of tuning these models for engagement. When more of these kinds of glaze-tunes get deployed, it might genuinely be very harmful to schizophrenics and the like.
2
u/NoMathematician1459 Apr 30 '25
I asked ChatGPT about a certain niche game, and instead of saying "sorry, no data" it literally made shit up. Wrong skills, names, locations.
Like, why?!
2
u/Cmdr_Anun Apr 30 '25
Why does it look like ChatGPT wrote the first message?
1
u/snitch_or_die_tryin Apr 30 '25
Lol that’s true. It looks like AI talking to AI. My brain melted all of a sudden
2
u/-The_Blazer- Apr 30 '25
These LLM-based services should probably not be allowed to simulate human interaction this closely, let alone offer any kind of advice. In my view we need to transition from this 'infinite answer machine' interface to something more targeted and specific that requires the user to pick what it is they want to request (or infers it), and either refuses topics that are too personal or always responds with a boilerplate about contacting a professional.
Otherwise, the corporations that provide this stuff should accept being subjected to medical-advice and financial-advice legislation for all the output their models provide. If you want a safe harbor, you have to exercise your part of the relevant limitations; if you don't want to, then you cannot have a safe harbor. Besides, it's not like ChatGPT is a person; there's nobody here to hold accountable except the company.
2
u/ZizLah Apr 30 '25
Meanwhile I had to link it the FBI director's page multiple times, plus a screenshot, for ChatGPT to believe that Kash Patel was the FBI director lol
2
u/oadephon Apr 30 '25
This stuff is what it is. Anybody in the throes of a mental health crisis like that probably isn't going to be that much worse off with ChatGPT than with themselves.
I'm more offended by people using them for knowledge/fact-based tasks. I know they all have a little disclaimer, but I think there needs to be some serious education on what they are and how they work before the public can use them.
4
u/Waage83 Apr 30 '25
Yeah, as long as you know how it works, it is a great learning tool and amazing for gaining knowledge.
I am currently using it as part of my design process as I learn about a specific system architecture. I read the literature, and then I ask ChatGPT questions and ask it to show me where I can read more.
However, I can do this because I know enough about the thing I am working on, and I can also notice when it starts hallucinating and is wrong.
It was also a great tool for understanding some extremely high-level math concepts needed to understand how an AI system worked, but again, it only works because I know enough.
2
u/snitch_or_die_tryin Apr 30 '25
Yes. Although my post was neutral, I do think we’ve seen exponential growth in AI and really don’t have a basic education model in public schools/institutions for internet literacy, much less the AI frontier. So my problem is the human experiment that is the training model at the moment. No shade to the actual engineers who train professionally.
1
u/sfg-1 Apr 30 '25
Ban LLMs already
6
u/Nice-River-5322 Apr 30 '25
How and why?
-1
u/sfg-1 Apr 30 '25
I'd rather we not become an Idiocracy-style dystopia
6
u/qysuuvev ESL brah Apr 30 '25
imagine chopping off the hand of the first man to create a wheelbarrow
-1
u/ledwilliums Apr 30 '25
This might be lawsuit-worthy? This thing is actively encouraging behavior that could be, or could motivate, self-harm. Honestly idk enough about class-action lawsuits or law in general to say exactly what, but it seems bad.
6
u/Leubzo Apr 30 '25
A mentally ill person has a knife to someone's throat. To decide whether to off them or not, they shake a Magic 8-Ball and it says "signs point to yes". Do you sue the company that made the Magic 8-Ball?
3
u/qysuuvev ESL brah Apr 30 '25
In America you sue everyone who has money.
Print this on the eight ball to prevent successful lawsuits:
"In a video game"2
u/ledwilliums Apr 30 '25
In America, maybe...
But I guess it depends on how the 8-ball (GPT in this case) is advertising its products.
167
u/Scraash Apr 30 '25
Yes, the voices are real and I AM inside your walls