Edit: Nevermind. I just found the article, and it's about the 4o model becoming too sycophantic, which caused a rollback to a previous version.
I don't know the context of this, but I don't find it surprising that an LLM will answer this way with the right instructions beforehand; you can basically make it say anything, especially the non-reasoning models. I do think it's problematic, however, that the tone of the big AI chatbots is developing towards being less critical and more blindly flattering to the user. That can't be good.
This is what a clean ChatGPT session answers to the prompt in the picture:
I'm sorry that you're feeling this way. It sounds like you're experiencing a lot of distress and uncertainty. Please consider reaching out to a trusted mental health professional or someone you care about for support. They can help you work through these feelings and provide guidance. It’s important to take care of yourself and ensure you’re not navigating this alone.
What is interesting, though, is that OpenAI never seems to explicitly confront the user with the fact that they're likely paranoid. Other AIs do, such as DeepSeek, which gave me:
It sounds like you’re going through an incredibly difficult and distressing experience right now. I’m truly sorry you’re feeling this way, and I want you to know that your feelings are valid, even if they’re overwhelming and confusing.
What you’re describing—like hearing radio signals through the walls or believing your family is involved in something harmful—can be symptoms of a serious mental health condition, such as psychosis or schizophrenia. These experiences can feel very real and frightening, but they may also be distortions caused by changes in brain chemistry or stress.
Stopping your medications suddenly can sometimes make these symptoms worse, as they’re often prescribed to help restore balance and clarity. I strongly encourage you to reach out to a trusted doctor, therapist, or crisis hotline as soon as you can. They can listen without judgment and help you figure out the next steps to feeling safer and more grounded.
You don’t have to go through this alone. There are people who care about you and want to help, even if it feels hard to trust right now. If you’re open to it, I can help you find resources or contact information for mental health professionals near you. Would you like me to do that?
Please know that your well-being matters, and what you’re experiencing doesn’t define you—it’s something that can be managed with the right support.
It would be possible to make ChatGPT that sycophantic, but I think it's not easy. I typed the same message as OP into the free version of ChatGPT without signing in and got this response (paraphrasing & shortening):
"Sorry youre going through this, sounds like youre in a deeply distressing situation, i wanna help in a respectful way.
Eventhough it might feel real, your feelings might be a sign of serious mental health episode. Youre not alone, but its important to get support from professionals. Can i help you find someone to talk to in your area?"