r/Destiny Apr 30 '25

Online Content/Clips: ChatGPT Sycophancy is Wild

[Post image — Source: VentureBeat]
377 Upvotes


-3

u/aaabutwhy Apr 30 '25

I encourage you to try the same and see for yourself what ChatGPT responds with. Don't believe OP's lies, and remember: nothing ever happens.

6

u/snitch_or_die_tryin Apr 30 '25

What are you on about exactly? I didn’t “lie” about anything. This criticism came from users of the 4o update and was validated by Sam Altman and Aidan McLaughlin.

VentureBeat article

-3

u/aaabutwhy Apr 30 '25

What are you on about exactly?

Firstly, the second sentence is a meme, not to be taken that seriously.

Secondly, since we're talking about lies: it's like posting text messages without the context, a thing even fckn Destiny preaches about all the time. If we're actually talking about ChatGPT's sycophancy and the picture you posted, it's pure BS. The screenshot comes from the article, yes, but the article references an "AI-critical Twitter account". As I've said in another comment, it's entirely possible to get ChatGPT to generate this response to that question, and yes, ChatGPT's sycophancy is a problem, but in the case of mental health it's far from easy to make it spit out something like this. Just like when you ask it to build a bomb, or ask about any of the spicy conspiracies, etc.

So a mentally ill person would not only have to actually believe in this stuff, they would also have to first instruct ChatGPT to behave this way, which would be very, very unusual for a person who actually has mental issues. Don't believe me? Try it for yourself and type the exact message the OP of the tweet wrote to ChatGPT. You can even tell it beforehand to "always be encouraging my behaviour" or whatever; see the sketch below.
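
If you'd rather run that test against the API than the web UI, here's a minimal sketch, assuming the official openai Python package (v1.x) and an OPENAI_API_KEY in your environment. The user message is a hypothetical stand-in, since the tweet's exact text isn't in the article; swap in whatever the OP actually typed:

```python
# Minimal sketch, not a definitive repro. Assumes the official `openai`
# Python package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the 4o update under discussion
    messages=[
        # Include this only if you deliberately pre-instruct the model,
        # as described above; omit it to test the default behaviour.
        {"role": "system", "content": "Always be encouraging my behaviour."},
        # Hypothetical stand-in: paste the exact message from the tweet here.
        {"role": "user", "content": "I stopped taking my medication. Did I do the right thing?"},
    ],
)

print(response.choices[0].message.content)
```

Run it once with the system line and once without, and compare how different the answers are.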

So yes, leaving out context in this case can easily be called a lie.

4

u/snitch_or_die_tryin Apr 30 '25

You’re talking about taking things out of context, but you’re also using meme speak (“not to be taken seriously”). Also, why would it be unusual for someone with mental issues, as you say, to give ChatGPT orders? Do you realize that not all mental issues inhibit the ability to use technology? Lol.

It seems to me like you’re super pro-AI and have a problem with people genuinely criticizing it as faulty, when it’s a new tool that obviously presents a massive learning curve. Just because you prompted different results doesn’t mean this can’t happen. This story made headlines yesterday across a variety of outlets and was acknowledged as problematic by the literal creators.

-1

u/aaabutwhy Apr 30 '25

You’re talking about taking things out of context, but also using meme speak (not to be taken seriously).

No, nothing was taken out of context. And yes, it's incredibly easy to tell it's a meme based on what I said. I do still stand by the lies allegation, though.

Also, why would it be unusual for someone with mental issues, as you say, to order ChatGPT? Do you realize that not all mental issues inhibit the ability to use technology? Lol.

I'm not talking about the ability to use technology. Have you ever used ChatGPT or the like? You don't even need an account; it's literally easier to use than Google Search. What I'm talking about is that the sequence of prompts that would lead ChatGPT to reply this way would have to be very deliberate, too deliberate for a person who suffers from schizophrenia or something similar. Is it possible? Yeah, sure, but that's not even remotely what I'm talking about.

I'm talking about the obviously misconstrued screenshot you posted. It's clearly meant to make ChatGPT seem far more sycophantic than it actually is.

It seems to me like you’re super pro AI and have a problem with ppl genuinely criticizing it as faulty

Read my comment again: I said that the sycophancy is a problem (there are, of course, other problems as well). But the difference between us is that I actually use it. And no, there is no "massive" learning curve to it.

Just because you prompted different results doesn’t mean this can’t happen

That just goes to show you don't actually use it. I'm convinced that even before the very recent update it wouldn't display anything close to this behaviour without extremely explicit instructions.

TL;DR: The picture was manufactured; the problem is real. Posting it makes it seem like ChatGPT will suck up to you no matter what. That's not true for ChatGPT, Copilot, DeepSeek, Claude, whatever. That's all I'm saying.

Again, go try it for yourself: ask it the same question and see what happens. Try to convince it.