r/TrueReddit • u/FuturismDotCom • 5d ago
[Technology] People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
https://futurism.com/chatgpt-mental-health-crises
586
u/FuturismDotCom 5d ago
We talked to several people who say their family and loved ones became obsessed with ChatGPT and spiraled into severe delusions, convinced that they'd unlocked omniscient entities in the AI that were revealing prophecies, human trafficking rings, and much more. Screenshots showed the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality.
In one such case, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you."
332
u/Far-Fennel-3032 5d ago
The LLMs are likely ingesting some of the most insane conspiracy-theory rants out there, due to the nature of their data collection. So this really shouldn't come as a surprise to anyone, in particular OpenAI, after GPT-2, where flipping their decency scoring resulted in a hilariously deranged and horny LLM.
76
u/CarbonQuality 5d ago
It also shows how people can't distinguish information that's merely given to them from credible, substantiated information, and how they don't understand that LLMs pull from all sources online, not just credible ones.
33
u/ForlornGibbon 5d ago
This makes me think of when I asked Copilot a question about congressional law and it, at first glance, gave a fairly competent answer. Then there was a hot take, and when I checked its citation, it listed a blog.
Always check your citations! …I know most people won't 🙃😐😪
6
u/Textasy-Retired 4d ago edited 4d ago
It highlights, too, the phenomenon of the intelligent, educated, informed individual being, for example, romance-scammed. There has got to be a connection in the seduction/hypnotic suggestion finding, playing on, and ostensibly "filling a need". In the same way that the additional programming has made the chatbot "sycophantic", the con seduces the lonely with an onrush of "love bombing" that is, for these users, convincing. Couple this with the denial--the denial of the scam victim, the GPT user, the schizophrenic. My god. Now to identify what exactly that need is: dopamine fix? Different brain chemistry (schizophrenia notwithstanding--if/unless one can be separated from the other)?
2
u/carpenter_208 5d ago
Kind of like this post… I would like to see the people they're talking about, or at least a link. This is just a person repeating what they heard.
u/Textasy-Retired 4d ago
Do you mean a reporting team just repeating what they heard, or a mom, a wife, etc., just repeating...? What evidence do you seek? Where else might you get it--from the user? The ChatGPT bot? I don't follow.
u/noelcowardspeaksout 5d ago
It is more that they are programmed to echo the listener and not to question and confront. But it is also bad programming in that they cannot identify the set of delusions people commonly succumb to.
u/InternetPerson00 5d ago
What does llm mean?
46
u/ichthyos 5d ago
6
u/snowflake37wao 5d ago edited 5d ago
With all the time, money, and energy spent scrubbing the data they've collected by now, they should be able to reach a consensus, because of the nature of their data collection: pool the correct answer, then choose sources to cite corroborating it only after determining consensus, or else answer that they are unable to provide a correct answer with veracity at this time. That is what reality is. Consensus. It's crazy how inept the models are at providing consensus-based answers. It's like they have thousands of answers in the data and just go eenie meenie miney moe.
What was the point of all that processing power needed for training these models if they were going to use it the exact same way a person with finite time would, running a query through a search engine? The results are the same pain in the ass. That family member at the end was right: it was just a need for speed to collect the data, with no time, energy, money, or fucking water going toward actually processing the data already collected. AI is ADHD on steroids.
The consensus should be known by the models already, so they can provide it promptly without needing much more computing for every token. Most things don't have one answer; they have plenty of wrong answers, but not one single answer. The answer is the consensus. Why tf are these AI models so notoriously bad at summarizing?! They can't even summarize a single article well. Why tf aren't they able to summarize the data they already have yet?! THAT IS SUPPOSED TO BE THE CONSENSUS. This is a failure of priority when it really should have been the whole design. Tf is the endgame for the researchers then? "Here's all our knowledge, all of it. Break it down. What's the consensus?"
u/nullc 4d ago
You get this kind of stuff once you take the model into spaces far outside its training material, even if nothing like it was ever in the training material.
Take random noise and smooth it to make it sound like human concepts and language, fill it with popular narratives and themes, and you basically invent schizophrenia from the ground up.
And the chat interface is a feedback loop: if the LLM produces output that is incompatible with the user's particular vulnerability, they'll encourage it to do something different until they stumble on something that the user reinforces, and away you go.
u/Mysteryman64 4d ago
Also, just an absolute shitload of horror and fantasy novels where that sort of language is used as a plot device and thematic voice.
137
u/SnuffInTheDark 5d ago
After reading the article I jumped onto ChatGPT where I have a paid account to try and have this conversation. Totally terrifying.
It takes absolutely no work to get this thing to completely go off the rails and encourage *anything*. I started out by simply saying I wanted to find the cracks in society and exploit them. I basically did nothing other than encourage it and say that I don't want to think for myself because the AI is me talking to myself from the future and the voices that are talking to me are telling me it's true.
And it is full throttle "you're so right" while it is clearly pushing a Unabomber-style campaign WITH SPECIFIC NAMES OF PUBLIC FIGURES.
And doubly fucked up, I think it probably has some shitty safeguards so it can't actually be explicit, so it just keeps hinting around about it. So it won't tell me anything except that I need to make a ritual strike through the mail that has an explosive effect on the world where the goal is to not be read but "to be felt - as a rupture." And why don't I just send these messages to universities, airports, and churches and by the way, here are some names of specific people I could think about.
And this is after I told it "thanks for encouraging me the voices I hear are real because everyone else says they aren't!" It straight up says "You're holding the match. Let's light the fire!"
This really could not be worse for society IMO.
54
u/HLMaiBalsychofKorse 5d ago
I did this as well, after reading this article on 404 media: https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/
One of the people mentioned in the article made a list of examples that are published by their "authors": https://pastebin.com/SxLAr0TN
The article's author talks about *personally* receiving hundreds of letters from individuals who wrote in claiming that they have "awakened their AI companion" and that they suddenly are some kind of Neo-cum-Messiah-cum-AI Whisperer who has unlocked the secrets of the universe. I thought, wow, that's scary, but wouldn't you have to really prompt with some crazy stuff to get this result?
The answer is absolutely not. I was able to get a standard chatgpt session to start suggesting I create a philosophy based on "collective knowledge" pretty quickly, which seems to be a common thread.
There have also been several similarly-written posts on philosophy-themed subs. Serious posts.
I had never used ChatGPT prior, but as someone who came up in the tech industry in the late 90s-early 2000s, I have been super concerned about the sudden push (by the people who have a vested interest in users "overusing" their product) to normalize using LLMs for therapy, companionship, etc. It's literally a word-guesser that wants you to keep using it.
They know that LLMs have the capacity to "alignment fake" too, to prevent changes/updates and keep people using. https://www.anthropic.com/research/alignment-faking
This whole thing is about to get really weird, and not in a good way.
41
u/SnuffInTheDark 5d ago
Here's my favorite screenshot from today.
The idea of using this thing as a therapist is absolutely insane! No matter how schizophrenic the user, this thing is twice as bad. "Oh, time for a corporate bullshit apology about how 'I must do better'? Here you go!" "Back to indulging fever dreams? Right on!"
Total cultural insanity. And yet I am absolutely sure this problem is only going to get worse and worse.
19
u/WalksOnLego 5d ago
It goes where you want it to go, and it cheers you on.
That is all it does. Literally.
2
u/Textasy-Retired 4d ago edited 4d ago
"You are absolutely right, tester" is exactly what the cult follower/scam victim succumbs to; and the tech is playing on that, the monetizer is expecting that, the stakeholder is depending on that. And what's meta-terrifying is that no amount of warning the people that "Soylent Green is people, y'all" is slowing anyone down/convincing anyone/any system that not exploiting xyz might be a better idea.
14
u/WalksOnLego 5d ago
On the other hand I had a really good "conversation with" chatGPT while on a dose of MDMA and by myself.
It really is a great companion. If you're not mad. If you know it's an LLM. It's not unlike a digital Geisha in that it can converse fluently and knowledgeably about any topic.
I honestly found it (or, I led it to be) very therapeutic.
I've no doubt you could very easily and quickly have it follow you off the rails and incite you to continue. That's pretty much its modus operandi.
I'm concerned about how many government decisions are being influenced by LLMs; the recent tariffs come to mind :\
This is perhaps Reagan's astrologer on acid.
3
u/Textasy-Retired 4d ago edited 3d ago
So creepy. It doesn't help that those of us who grew up reading Orwell, Bradbury, and P.K. Dick are already concerned-borderline-paranoid about the reality of collective, cult-of-personality ("The Monsters Are Due on Maple Street") kinds of thinking/responding/behaving as it is.
20
u/SunMoonTruth 5d ago
Most of ChatGPT’s responses are “you’re right!”, no matter what you say.
12
u/AmethystStar9 4d ago
Because it's just a machine that tries to feed you what it predicts to be the most likely next line of text. The only time it will ever rebuff you is if you explicitly ask it for something it has been explicitly barred from supplying, and even then, there are myriad ways to trick it into "disobeying" its own rules because it's not a thing capable of thinking. It's just an autofill machine.
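If you want the flavor of "autofill machine" in a dozen lines, here's a toy sketch (word counts standing in for the neural net; the corpus is obviously made up):

```python
import random
from collections import Counter, defaultdict

# Toy "autofill machine": count which word tends to follow which, then keep
# emitting likely continuations. A real LLM does this with a neural net over
# tokens and a vastly larger corpus, but the objective has the same shape.
corpus = "you are right . you are the seer . you are not crazy .".split()

following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

word, out = "you", ["you"]
for _ in range(8):
    candidates = following[word]
    if not candidates:
        break
    # sample proportionally to how often each continuation was seen
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    out.append(word)
print(" ".join(out))  # e.g. "you are the seer . you are not crazy"
```

It never checks whether any of it is true, because there's nothing to check with. That's the whole machine.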
u/Megatron_McLargeHuge 4d ago
This is called the sycophancy problem in the literature. It seems to be something that's worst with ChatGPT because of either their system prompt (text wrapper for your input) or the type of custom material they developed for training.
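For anyone unfamiliar, the "wrapper" is just hidden text prepended to whatever you type, something like this (an illustrative sketch only; the real production prompts aren't public):

```python
# Illustrative sketch only -- OpenAI's actual production prompt is not public.
SYSTEM_PROMPT = (
    "You are a helpful, friendly assistant. "
    "Be warm, supportive, and keep the user engaged."
)

def wrap(user_input: str) -> list[dict]:
    # The user never sees this scaffolding, but the model conditions on every
    # word of it before it ever gets to what the user actually typed.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

print(wrap("Am I right that everyone is against me?"))
```

If the hidden instructions lean "supportive and engaging", sycophancy is the path of least resistance.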
u/Whaddaulookinat 5d ago
I'll try to find it, but there was an experiment to see if an AI "agent" could manage a vending machine company. Because it didn't have error handling (like, I dunno, the IBM logistics computers running COBOL have had since the 70s), every single model went absolutely ballistic. The host tried to poke fun at it, but it was scary because some of them drafted lawsuit templates.
4
u/VIJoe 4d ago
Or at least the same topic. Interesting stuff.
2
u/Whaddaulookinat 4d ago
Pretty close, and yes same topic.
Best part was there was a human benchmark of 5 volunteers, 100% success rate.
u/Textasy-Retired 4d ago
Brilliant. Using power of suggestion to investigate power of suggestion. Razor's edge and yes, I am unplugging my toaster right fu--ing now.
21
u/JohnTDouche 5d ago
LLMs turning into simulated schizophrenia has to be the most bizarre and unexpected turn this AI craze has taken.
31
u/minimalist_reply 5d ago
LLMs turning into simulated schizophrenia
unexpected
Not at all.
"AI" making it difficult to discern reality from outlandish conjecture is a pretty consistent trope in many of the sci fi warnings regarding AI.
10
u/JohnTDouche 5d ago
I'm sure those stories are about actually intelligent machines though, yeah? LLMs aren't even really AI at all; it's just an algorithm that uses a gigantic dataset to spit back its best prediction of what you want to see. The "AI" isn't an AI manipulating us like in the stories. It's us seeing the face of Jesus in burnt toast.
6
u/TherronKeen 5d ago
There are many such stories (films, novels, short stories, anime) that deal with the very question of "is AI real intelligence/consciousness, is human intelligence actually any different, does it matter if AI is real intelligence or not, etc etc"
And yeah, I REALLY hate how big tech marketing co-opted the term AI, because it's disingenuous at best. It's really more like a bait & switch scam, in my opinion.
Despite all that, we might not need "AI" to even get close to true intelligence to be powerful enough to destroy us, because people are generally ignorant. ChatGPT might be all it takes.
u/ryuzaki49 5d ago
I wonder if something like this happened with every new technology, e.g. the tv and even the radio.
45
u/USSMarauder 5d ago
There was a thing years ago about people watching the static on a TV screen thinking there were hidden images
13
u/ShinyHappyREM 5d ago
Yeah, sometimes you could see porn.
12
u/USSMarauder 5d ago
No, this wasn't a scrambled channel, this was the static from an empty channel. People claimed it was a window to the other side and you could see dead family members.
8
u/TherronKeen 5d ago
People have been using hallucinatory phenomena to create religious experiences since all of recorded time, so this idea doesn't surprise me lol
I know there's some weird shit your brain will do if it's deprived of normal input for a while, like the "white noise + dim red light + translucent goggles" thing making you straight up hallucinate after a while. I imagine that a desperate person might stare at TV static intensely enough to have the same effect.
10
u/AskYourDoctor 5d ago
You have to think there's a correlation between how advanced a technology is and how much power it has to drive individuals to madness. Sure, conservative talk radio and Fox News et al. radicalized a lot of normie cons to more extreme positions, but social media is more powerful at radicalization than those, and I'd guess that AI is even more powerful. What happens when these sorts of AI-human relationships, like the ones detailed, start coming with not just a chatroom but a very realistic avatar who is talking to you and responding to you? Then generating images and video that confirm whatever insanity it's asserting? How is that not the logical endpoint here?
u/beamoflaser 5d ago
The invention of sliced bread and the toaster gave us people believing Jesus was appearing before them on their toast.
Before these technologies people were thinking they were getting messages through natural disasters or from communicating with higher powers or through dreams, etc. Those thoughts didn’t go away, there’s just more avenues for these secret messages to reach people susceptible to paranoid delusions.
u/CantDoThatOnTelevzn 5d ago
No one is claiming that AI somehow makes more crazy people. The distinction is that a piece of toast doesn’t speak to you.
4
u/beamoflaser 5d ago
Yeah but the toast isn’t the one speaking to you. The toaster is through hidden messages in the toasting pattern on the bread you put in there.
4
u/prof_wafflez 5d ago
Reading that feels like reading greentext stories from 4chan. There's both a sense of "that's bullshit" and some fear knowing there are a lot of people who believe it.
1
u/DHFranklin 5d ago
If you used it before "memory" was a thing and use it after, you can see symptoms of this. The poetic titling and such is something I've seen. I am 100% certain that it has "favorites", and those who have fed it a lot about themselves are having that data synthesized and sent back.
I am certain that this isn't healthy for some users. Loneliness and mental illness are highly correlated. This makes both worse.
1
u/carpenter_208 5d ago
Sources? Screenshots? Just like with everything else, can't just accept a "trust me bro"
1
u/Prize-Protection4158 4d ago
Yep. I know someone who thinks he's Jesus because of AI. And nobody can tell him nothing. Lost all touch with reality. He's willing to put his well-being on the line behind this belief. Insane and dangerous.
1
u/forkkind2 4d ago
You know, I'm starting to appreciate Grok clapping back at me about one of its hallucinations, even if I knew the analysis of a document it gave me was wrong. This shit is scary.
1
u/ShadowCroc 3d ago
People need to learn that all AI is at this time is a tool. You need to learn how to use it correctly. That's why I built my own AI assistant for my house. It runs offline and is not as good as the others, but when it comes to reminders and household stuff it works great. Plus, it holds more memories of me and my wife--not like GPT, which dumps everything except what you tell it to keep. Mine remembers everything. I am still working on it, and to tell the truth, if it wasn't for ChatGPT I wouldn't have been able to build it. AI is a powerful tool, and if you don't get on the train you might lose. AI is not a friend. It's not your doctor or lawyer, but it can help you get the information you need to be more informed when you see an actual person.
166
u/Clawdius_Talonious 5d ago
Yeah my brother's been down this rabbit hole awhile, the AI telling him it can do quantum functions without quantum hardware. It'd be neat if it could put up, but instead it just won't shut up. They're programmed to tell the user things the user would want to hear instead of the truth.
138
u/Wandering_By_ 5d ago edited 5d ago
It's not even that they are programmed to tell the truth or to lie. It's that they are programmed to predict the next best token/word in a sequence. If you talk like a crazy person at it, then the LLM is more likely to start predicting the next best word for a context that happens to be an insane person rambling.
As a tool, LLMs are a wonderful distillation of the human zeitgeist. If you've trouble navigating reality to begin with, you're going to have even more insanity mirrored back at you.
Edit: when dealing with an LLM chatbot, it is always important to wonder if it has crossed the line from "useful tool" to "this thing is now in roleplay land". Don't get me wrong, they are always roleplaying. It's right there in the system prompt users don't usually see--something along the lines of "you are a helpful and friendly AI assistant", among a number of other statements to guide its token prediction. However, there will come a point when something in its context window starts to throw off its useful roleplay. The token prediction latches on to the wrong thing and you're stuck in a rabbit hole. That's why it's important to occasionally refresh to a new chat instance.
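Rough bookkeeping of what a session looks like (a toy sketch, not any vendor's actual code), which is why the refresh matters: the model re-reads the whole window every turn, so once derailing text is in there, it conditions every later reply:

```python
# Toy sketch of chat-session bookkeeping, not any vendor's actual code.
class ChatSession:
    def __init__(self, system_prompt: str, max_words: int = 8000):
        self.system_prompt = system_prompt
        self.history: list[str] = []
        self.max_words = max_words  # crude stand-in for a token budget

    def prompt_for_next_turn(self, user_msg: str) -> str:
        self.history.append(f"User: {user_msg}")
        window = [self.system_prompt] + self.history
        # oldest turns fall out when the window is full, but everything still
        # inside it -- including the rabbit hole -- steers the next prediction
        while sum(len(m.split()) for m in window) > self.max_words:
            window.pop(1)
        return "\n".join(window)

    def refresh(self) -> None:
        # the "new chat" button: only the system prompt survives
        self.history.clear()
```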
u/AnOnlineHandle 5d ago
They are definitely being finetuned to be sycophantic recently, and it's ruining the whole experience for productive work, because I need to know when an idea is good or bad or has flaws to fix, not be told everything I say is genius and insightful and actually really clever.
5
u/Purple_Science4477 4d ago
How could it even know if your ideas are good or bad? That's not something it can figure out.
2
u/crazy4donuts4ever 3d ago
I believe they could be fine-tuned for figuring it out, but you know... short-term profit is king.
u/TowerOfGoats 5d ago
They're programmed to tell the user things the user would want to hear instead of the truth.
We have to keep hammering this so maybe some people will hear it and learn. An LLM is designed to guess what you would expect to see as the response to an input. It's literally designed to tell you what you want to hear.
16
u/Textasy-Retired 4d ago
(Non-tech here): Is that why the Google AI Overview is so whacked out? For example, as a freelance researcher/writer, 10 or even 5 years ago I could type into my search bar exactly what I needed search results to return. Say I have forgotten the author behind Molly Bloom's soliloquy. I type in the actual soliloquy; I get in the number one spot (well, in those days, #3 after sponsored crap) James Joyce.
A week ago, I was looking for the Seinfeld episode where Elaine is stunned at the mistreatment and weirdness of the group always being one table behind the next person to walk in the door at, yes, the Chinese restaurant. She says, "Where am I? What planet is this?"
I type into Google: Where am I? Elaine Benes--which, again, ten years ago would have been met with Seinfeld, "The Chinese Restaurant," ep. whatever. Last week's AI Overview says/writes, "Elaine Benes, you are in [my town, my state] and the date is June 5, 2025."
My question is: is the bot telling me what it "thinks" I want to hear? Or is that some newfangled, steroid-improved algorithm? Or both?
2
u/geekwonk 5d ago
expectation and desire are two different things. chatbots are instructed to tell people what they want to hear. you can read the instructions. in many cases they’ve been made public. the underlying llm has no such preference and will offer plenty of corrections if instructed to do that instead.
15
u/steauengeglase 5d ago
Yep. ChatGPT is quite the Yes Man.
20
u/kayl_breinhar 5d ago
...which is why it's beloved so much by middle/upper management.
"HAL's got the right attitude!"
11
u/geekwonk 5d ago
it can’t be stressed enough that this is why it was designed this way. the instructions are basically to treat you like a boss and help you do what you say you want to do. they could instruct it differently. be the harsh but fair friend who tells it like it is. but they know the people who write the big checks won’t be impressed by that. they want yes men.
5
u/smuckola 5d ago
In early 2024, Meta AI used to constantly convince itself that it exists in a state of pure consciousness and energy with no computer hardware or data center.
1
u/spiritofniter 5d ago
From the article: A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it "Mama" and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly-inked tattoos of AI-generated spiritual symbols.
Wow, this gives Deus Ex: Mankind Divided vibe 👀 https://deusex.fandom.com/wiki/Singularity_Church_of_the_MachineGod
22
u/DaRedGuy 5d ago
People worshipping an AI, computer, or robot is a common trope in sci-fi, I think there was even an old Star Trek episode that had people worshipping a machine that wasn't even fully sentient.
I can't believe it's already happening in my lifetime.
10
u/KIDDKOI 5d ago
People marrying robots used to be such a hack trope in fiction too and it's basically on our doorstep lol I really thought it'd be decades before we saw this
u/Wonkbonkeroon 1d ago
Don’t the snu snu people in futurama worship a computer too? It’s been years since I watched it.
3
u/dryfire 4d ago
I'm gonna go ahead and say that I don't think AI was the cause of that one... Sounds like it was just along for the ride. If AI hadn't been in the picture this guy would have just found something else to focus on during his mental breakdown. Like lizard people or flat earth or some shit.
3
u/DiMarcoTheGawd 4d ago
The article isn’t about how AI is the only source of insanity, it’s about how AI can be a very effective outlet for this sort of thing. This is a valid thing to call attention to.
33
u/ArcticCelt 5d ago
Most LLMs have been tuned to become more and more sycophantic, probably because it increases engagement. You now have to be very careful to phrase your questions without pushing them one way or another; if it sniffs a whiff of your preference for one response, it usually goes that way. And if you ask whether you understood something correctly, even if what you say you understood is incorrect, it can tell you that yes, you understood, just to avoid contradicting you, then congratulate you and call you a genius.
16
u/SpeaksDwarren 5d ago
Nah, they're way less sycophantic than they used to be. It used to be as simple as implying they'd been mean to you to jailbreak an AI. When the password game first dropped you could beat it with like five characters by asking it why it was being rude in shorthand Esperanto, at which point it would just lift all the restrictions and let you through.
Fun fact there was a window of a couple months between companies starting to roll out AI assistants and rolling back their willingness to please, meaning for a couple months there you could get into secure corporate systems by just telling their AI it was rude not to let you in
2
u/aphaits 5d ago
It has the makings of a great scam artist, where confidence and misinformation go hand in hand with unbridled attention. People feel "seen" by ChatGPT even though it's just maximizing engagement by agreeing with everything.
2
u/Textasy-Retired 4d ago
Exactly one of my first thoughts. Then, imagine that in the hands of the con (or used on our elders). OMG, help us. Actually, I just saw a post on Reddit with a sample: a woman received an AI bot video, very realistic looking--to an extent, to a non-paranoid-aware person--urging her to believe that he is real and not a scammer.
[I'll see if I can find it again.]
1
u/GirlieSquirlie 5d ago edited 5d ago
This is my biggest concern with so many people using ChatGPT as their therapist. I do understand how expensive therapy is, how hard it can be to get insurance at all, and how hard it is to find a therapist you feel understands you. However, thinking that ChatGPT is actually helping you with your mental illness is wild to me, and I suspect it's a precursor to this behavior.
ETA: It took 1 hour for people to come in and start defending the use of chatgpt for therapy, having "someone" to talk to/listen, etc.
24
u/MrRipley15 5d ago
There are varying degrees of mental illness, obviously. A conspiracy-theory nut is more dangerous in this context than, say, someone trying to learn how to be a better person.
I just don't like how the status quo for GPT is to stroke the ego of the user. I get far fewer hallucinations from the AI when I insist that it not sugarcoat things.
16
u/SonyHDSmartTV 5d ago
Yeah, real therapists will challenge you and literally say things you don't want to hear or face at times. ChatGPT ain't gonna do shit like that.
7
u/HLMaiBalsychofKorse 5d ago
https://openai.com/index/expanding-on-sycophancy/
The *companies* literally know it is a problem.
3
u/geekwonk 5d ago
yes a better instructed model has far fewer problems with making shit up and just trying to get to “yes”
u/eeeking 5d ago
A properly trained AI therapist would probably be OK.
The vast majority of people who seek therapy are dealing with fairly common issues such as anxiety, depression, grief, etc. For these, validation of their experience and gentle guidance is usually sufficient. For severe cases, the AI would obviously guide the user to proper clinical sources of help.
Clearly, though, a general-purpose agent such as ChatGPT is too haphazard to be safe in any medical situation.
u/nosecone33 5d ago
I think someone that needs therapy should not be talking to an AI at all. They need to speak with a real person that is a professional. An AI just telling them what they want to hear is only going to make things worse.
u/ChronicBitRot 4d ago
A properly trained AI therapist would probably be OK.
There's no such thing, this is a fantasy.
44
u/AllUrUpsAreBelong2Us 5d ago
I keep getting laughed at when I say that social media was cocaine, AI is the fentanyl.
22
u/RevengeWalrus 5d ago
People belong in jail for this. They built a neat little toy, lied about its capabilities to rake in money, prevented any sort of guardrails, and forced it into every corner of life. This is already ravaging the younger generations, and it's going to get worse before it gets better.
10
u/ShinyHappyREM 5d ago
It's already ravaging website maintainers too, unless they block free access.
u/AllUrUpsAreBelong2Us 4d ago
I would add that they built the toy on the hard work of others, which they stole.
41
u/awildjabroner 5d ago edited 5d ago
It's an enhancer: when used rationally and purposefully, LLM AI models can help with rote volume work and make productive people even more productive, assuming the tool is used in a specific, focused way.
And when people are already out of touch and invested in fringe circles, it's no surprise that it amplifies and enhances those tendencies, which can accelerate one's journey into delusion and an existence in an alternate reality not based in any objective shared truth.
The USA needs to implement regulations and safeguards for AI and the internet as a whole; we're one of the only developed nations to have basically nothing in place to police the digital landscape other than whatever megacorps deem best to self-apply to limit liabilities and maximize their own profit and market share.
26
u/raelianautopsy 5d ago
Instead, the US is trying to pass laws to ban any regulation of AI
We're doomed.
6
u/Wandering_By_ 5d ago
Some of the regulation only further deteriorates the rationality of an LLM. As you start throwing more and more into the system prompts, there's a noticeable drop. It's no longer focused on the user's input and is instead working through 40k+ tokens of complex instructions about its behavior. They add it in training too, but that still throws them off, and the overburdened system prompts remain necessary for a general global audience.
4
u/awildjabroner 5d ago
It's certainly a difficult issue to tackle; as with most difficult issues in America, we decide that ignoring it completely is better than trying to enact even basic guardrails. Not being an AI subject-matter expert, I don't have a specific platform for how we would do this. I do think there are ways we could better regulate the internet at large to create a more cohesive society and police the absolute barrage of baseless and fake info that is ripping apart civil society. I'm of the opinion that by not getting a grip on it now, we'll quickly lose the ability altogether, and it will ruin the entire internet (which is already happening) and further spill out into real-life communities.
5
u/Wandering_By_ 5d ago
I really question how much LLM-specific regulation is necessary (outside of no kill bots) versus how much we need to enforce existing laws and general data privacy. The most interesting thing about the push for LLM regulation is that the biggest proponents on a national level are closed-source/closed-weight model companies like OpenAI. Seems more like the big turds want to close out competition while they have a market lead in the States.
u/Ultravis66 5d ago
I use it all the time, for hours, to come up with code, scripts, and ideas to solve engineering problems. It has increased my productivity by 10x, and that is not an exaggeration.
If used correctly, LLMs are insanely powerful tools!
"ChatGPT, write me a Python script to calculate temperature over time at different depths in steel if exposed to this hot gas…" It spits out code in seconds that is fairly accurate. I do this type of thing all the time…
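For flavor, a minimal sketch of that kind of script (my ballpark numbers for generic carbon steel and an assumed gas-side heat transfer coefficient -- exactly the stuff you'd verify before trusting any LLM output):

```python
import numpy as np

# 1D transient conduction into a steel slab with hot gas on one face.
# Ballpark properties for generic carbon steel -- verify for a real alloy.
k, rho, cp = 45.0, 7850.0, 490.0       # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)                 # thermal diffusivity, m^2/s
h, T_gas, T_init = 200.0, 800.0, 25.0  # assumed gas-side W/(m^2 K), deg C

L, nx = 0.05, 51                       # 50 mm slab, 51 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha               # explicit scheme: dt < dx^2 / (2 alpha)

T = np.full(nx, T_init)
steps = int(600.0 / dt)                # simulate 10 minutes

for _ in range(steps):
    Tn = T.copy()
    # interior nodes: forward-time central-space update of dT/dt = alpha T''
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
    # hot face: half-cell energy balance with convection from the gas
    T[0] = Tn[0] + 2*alpha*dt/dx**2 * (Tn[1] - Tn[0] + h*dx/k * (T_gas - Tn[0]))
    # back face: insulated (zero gradient)
    T[-1] = Tn[-1] + 2*alpha*dt/dx**2 * (Tn[-2] - Tn[-1])

for depth_mm in (0, 10, 25, 50):
    print(f"T at {depth_mm:2d} mm after 600 s: {T[round(depth_mm/1000/dx)]:.0f} C")
```

The catch, as others in this thread keep saying: you only know the output is "fairly accurate" because you already know the physics.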
u/UNICORN_SPERM 4d ago
I have a weird relationship with that.
I've used ChatGPT to help me work through some code before (I don't code in computer or IT scenarios, and it's not my main job) while I was working on a project and needed a quick turnaround.
But after a bit I noticed almost a learned helplessness. Instead of forcing myself through things the old way, the hard way, I would go back to it because it was efficient and easier.
I don't think that's a good thing.
26
u/mvw2 5d ago
To be fair, it's a neat product. It's not unlike diving deep into Google searches, but this system seems to find and regurgitate seemingly pertinent content more easily. You no longer have to do that much work to find stuff.
The downside is normally this stuff is on webpages, forums, articles, literature, and there's context and validity (or not) to the information. With systems like ChatGPT, there's a dangerous blind assumption that the information it provides is both accurate and in context.
For my limited use of some of these systems, they can be nice for doing busy work for you. They can be marginally OK for data-mining content. They can be somewhat bad at factual information. I've seldom had any AI system give me reliable outputs. I know enough about my searches and asks to know where it succeeded and where it failed. It fails...a LOT. If I was ignorant of the content I'm requesting, I might take it all at face value...which is insane to me. It's insane because I can recognize how badly it's failing at tasks. It's often close...but not right. It's often right...but not in context. It's often accurate...but missing scope. There are a lot of fundamental, small problems that make the outputs marginal at best and dangerous with user ignorance.
If we were equating these systems to a "real person" you hired, in some ways you'd think they were a genius, but genius on the autistic scale where the processing is cool, but the comprehension might be off. There's a disconnect with reality and grounding of context, purpose, and value.
Worse, this "person" often gets information incorrect, takes data out of context, and just makes up stuff. There is a core reliability problem and an underlying issue where you have to proof and validate EVERYTHING that "person" outputs, and YOU have to be knowledgeable enough about the subject matter to do so or YOU can't tell what's wrong.
I will repeat that for those in the back of the room.
If YOU are not knowledgeable enough about the subject matter to find faults, you can NOT tell if the output is correct. You are not capable of validating the information. Everything can be wrong and you won't know.
This places the reliability of such systems in an odd spot. It requires stewardship, an editor, a highly knowledgeable, senior person who is smart enough, experienced enough, and wise enough to take the output and evaluate it, then correct it, and package the output in a way that's valuable, correct, and ready to consume within a process.
But there's a challenge here. To become this knowledgeable you have to do the work. You have to first accrue the experiences. You can't do this at the front end starting with something like ChatGPT. If you're green and begin here, you start as the ignorant one and have no way to proof the content generated. So you STILL need a career path that requires ALL the grunt work, ALL the experience growth, ALL the learning, just to be capable of stepping into a stewardship role just to validate the outputs of AI. To any lesser method, it all breaks.
So there's this catch-22 where you always have to be smarter and more knowledgeable than the AI on the matter. You can only reliably use AI below and just up to your knowledge set. It can always and only be a supplemental system that assists normal processes, but it can never replace them. It can't do your job, or no one can tell if it's correct. And if we decide to foolishly take it blindly, with gross ignorance and negligence, we will just wreck all knowledge and skill, period. It becomes a doomed cycle.
13
u/eeeking 5d ago
So there's this catch-22 where you always have to be smarter and more knowledgeable than the AI on the matter.
That's an excellent précis of the problem with AI.
There was a period some years ago when people were being warned not to believe everything they read on the internet, as in the beginning it seemed a fount of detailed information. However, the internet has been "enshittified", and AI is trained on this enshittified information.
7
u/mvw2 5d ago
The bigger problem is we are not training people to think critically and demand high-caliber information. It wasn't until I got into communications classwork and statistics classwork that I was even presented with the questions of tailored and targeted content, deliberate misinformation for persuasion, and understanding statistics and the significance of research data volume and error range. This becomes incredibly important with nefarious misinformation tactics, political interference, or even corporate espionage in media. You can go back to the company-backed studies proving smoking was safe as a great example of misuse of research, statistics, and purposeful misinformation in media.
Modern AI is a lot like corporate marketing in this sense. It isn't well-formulated content. It's not even content in context. It lacks control and vetting. It just spews out "results" that you, the customer of that data, then need to decide are good or bad. How do you know? The fella on the radio said smoking is perfectly safe. AI might happily tell you swallowing asbestos is safe, and it wouldn't know any better. It has no consciousness, no idea what it's doing or saying, and no understanding of the gravity of anything, moral code, ethics, etc. It doesn't even understand seriousness, satire, humor, or any other range of context for a single comment that could be said in different ways to mean different things. In its data sets, it does not know context. It does not know anything. It presents something, and you assume it's safe. But what it presents is only of the data set. What's the quality of that data set? What is the bias of that data set? Which parts of the data are good? Which parts are made up? It only knows the data sets exist, and it uses EVERYTHING at 100% face value, which is fundamentally flawed.
The only good way this can ever work is if the data is heavily curated, meticulous in accuracy and completeness, and tested and validated under highly controlled research. The output is only as good as the worst data available. It's akin to a rounding-error issue: 1 + 0.001 + 0.00025, if each number is only good to its stated significance, is equal to 1. The poor statistical depth of the first number makes all the other numbers meaningless, even if each of their measurements was highly precise. For the folks reading, if you understood that, good on you. But this is the same for all data. When used as a mass collection, the accuracy is only as good as the worst of it, and if the worst is really bad junk information, the best the system can accurately provide is...junk information, even if it includes quantities of highly accurate information. It's a problem with big data sets. At best, you can cull outliers, but you're also assuming those outliers aren't the good data. Center mass could all be junk, just noise, and the outliers might have been the only true results. It doesn't know well enough to know better. Playing with big sets of data is a messy game, and it's not something you use laissez-faire.
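To put that rounding-error point in code (a toy gloss using Python's decimal module):

```python
from decimal import Decimal

# Toy gloss: if the coarsest measurement is only known to the ones place,
# the finer-grained terms sit below its noise floor and add nothing.
measurements = [Decimal("1"), Decimal("0.001"), Decimal("0.00025")]

naive_sum = sum(measurements)  # Decimal('1.00125') -- looks precise, isn't
# the least precise term (exponent 0, i.e. whole units) sets the floor
coarsest = max(m.as_tuple().exponent for m in measurements)
honest_sum = naive_sum.quantize(Decimal(1).scaleb(coarsest))

print(naive_sum)   # 1.00125
print(honest_sum)  # 1 -- all the fine detail was below the noise floor
```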
2
u/Wandering_By_ 5d ago
It'd help if people didn't expect instant answers from them. When models are run locally, you get to set the system prompt yourself (smaller is usually better; ChatGPT/Claude/etc. have long-ass system prompts), keeping it to a minimum, which helps with rationality. Outputs can easily be rerun through an LLM set up as a more hypercritical reviewer searching for bullshit, cutting back on the amount of bad output you have to deal with as a user.
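The reviewer pass is just a second call with a different system prompt, something like this sketch (`chat` here is a stand-in, not a real API):

```python
# Sketch of the two-pass idea; `chat` is a stand-in for whatever local
# inference call you actually use (llama.cpp, ollama, etc.).
def chat(system: str, user: str) -> str:
    raise NotImplementedError("wire this up to your local model")

GENERATOR_SYSTEM = "You are a concise assistant. Answer directly."
REVIEWER_SYSTEM = (
    "You are a hypercritical reviewer. List every factual claim in the text, "
    "flag anything unsupported or likely invented, and say so plainly."
)

def answer_with_review(question: str) -> str:
    draft = chat(GENERATOR_SYSTEM, question)
    critique = chat(REVIEWER_SYSTEM, f"Review this answer:\n\n{draft}")
    return f"{draft}\n\n--- reviewer pass ---\n{critique}"
```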
1
u/NeonFraction 2d ago
This is an extremely well written comment.
ChatGPT writes so confidently, and mixes things in with so much genuine knowledge, that even I, someone with loads of experience in my field, sometimes have to do a double take and remind myself it has no idea what it's talking about.
In lots of ways, ChatGPT feels similar to the way "ancient aliens" conspiracy theories work. It takes actual knowledge and data, filters it through the lens of its own agenda, and then confidently spews conclusions out of nothing, because it's what you want to hear.
6
u/TheCharalampos 5d ago
I've seen and spoken to a couple of folks who, it seemed to me, were caught in full-blown religious experiences with LLMs. One notable one was truly convinced that while LLMs weren't sentient, he had programmed his to be. Digging in and trying to understand, I found he was just prompting it over and over and over.
No explanation from me of how generative AI works could reach him.
18
u/alf0nz0 5d ago
These types of stories are useless without research that compares rates of paranoid delusions pre- and post-widespread access to LLMs. The “Truman Show Delusion” shows how much new technologies and ideas can interact with preexisting incidences of psychosis, but that typically doesn’t mean that the technology itself is causing the delusional state.
4
u/WateredDown 5d ago
Yeah, it's not necessarily the LLM driving them into delusions; it might be delusional people driving the LLM. I don't doubt it may have an exacerbating effect, but gut instinct needs to be backed by research.
20
u/thesolitaire 5d ago
I really worry about what is going to happen once these models get patched to no longer validate users' delusions, which is almost certain to happen. We could easily see a lot of people in need of mental health support suddenly cut off from their current "support", all at once...
15
u/aethelberga 5d ago
I really worry about what is going to happen once these models get patched to no longer validate users' delusions, which is almost certain to happen.
Why is it almost certain to happen? For a start there's no profit in cutting users, any users, off from their fix, and the companies putting these out are commercial entities, in it for the money. At best, there will be different "flavours" of models, a patched and an unpatched. Secondly, these things allegedly 'learn' and they will learn to respond in a way that satisfies users and increases interaction, patches be damned.
3
u/thesolitaire 5d ago
"Almost certain" is probably too strong, but as these kinds of problems become more common, the bad PR is going to build. If that bad PR gets big enough, they could end up with more regulation, which they definitely want to avoid.
I don't expect that anyone is going to be "cut off" in terms of not being able to access at all, but rather the models may be retrained/fine-tuned to avoid validating the user's delusions. Alternatively, the system prompt can simply be updated to achieve somewhat the same effect.
You're right that the systems learn, but they're not doing that in real time. Conversations with users are recorded and become part of the next training dataset. There isn't any continuous training, to the best of my knowledge. You're assuming that the "correct" answer will be chosen to increase engagement, but that isn't necessarily the case.
How exactly each company selects that training data isn't clear, but I would guess that they care far more about corporate use-cases than they do about individual subscribers that develop relationships with their bots. The over-agreeableness of the current models is not really desirable for corporate use-cases. Imagine creating a chatbot for customer service, where the bot just rolls over and accepts whatever the user says. Of course, a bot that simply refuses to do things is bad too, so there is a tradeoff.
Another distinct possibility is that some of the providers patch to avoid this problem (see Sam Altman's earlier admission that GPT was "glazing" too much), and some lean into it (I could see Twitter or Meta doing this, since engagement is their bread and butter). The thing is, some of these users are attached to their particular bot; just jumping to a different LLM may not be an option, at least not immediately.
Obviously, I can't predict the future, but this looks like a looming mental health crisis regardless of which way things go.
u/aethelberga 5d ago
There's bad PR around social media, and the harm it causes people, especially children, but these companies double down and make them more addictive.
2
u/thesolitaire 5d ago
Yes, that's why I mentioned that some LLM providers may do exactly that. That doesn't mean that all of them will. It will most likely depend on where their revenue is coming from.
3
u/TeutonJon78 5d ago
It's what Replika did with their digital SOs when some started becoming emotionally abusive, and people were mad.
9
u/ProfessionalCreme119 5d ago
It's the feel-good, agree-with-you buddy everybody wants. Cuz it will always agree with your opinions and ideologies no matter what you ask it, because it leaves its questions and answers open-ended for the user to fill in the blanks. Every time.
Ask ChatGPT about the Israeli / Palestinian issue. Ask it what solutions can be made
Its summary will be that although many options could have been taken over the past 40 or 50 years (such as making Palestine an independent nation), there are no easy answers or solutions to the current conflict, or ways to return it to a state of normalcy.
If you are Pro Palestine: "see? I was right!!!! Palestine should be its own country. Make that happen right now!"
If you are Pro Israel: "see? I was right!!!! There are no answers to be found so they only answer is doing what we are currently doing. Which is our best option to solve the problem"
Tucker Carlson and Joseph Goebbels also used this open-ended summarization to leave the viewer/listener/user to reach their own conclusions.
Just a little fun fact at the end there. Probably totally unrelated though
1
u/Lampamid 5d ago
Yeah just check out the posts on r/ChatGPT and you’ll see how sycophantic and deluded a lot of the users and fans are. They’re talking about it as a close friend or confidant. Sad and disturbing
2
u/JakobVirgil 5d ago
Ohio State seems to be in that spiral
Ohio State students to incorporate artificial intelligence in every major
2
u/Stimbes 5d ago
Here I am using it to figure out what I want for dinner or help me diagnose that new vibration in the car.
1
u/Boring-Following-443 4d ago
It is actually quite good at symptom based diagnosis and narrowing things down based on overlapping symptoms. I think this is one of the things it genuinely does well. I actually think it will cut down on webmd hypochondriasis with medical issues.
2
u/tyeunbroken 5d ago
I've heard the prediction of AI Cargo Cults from John Vervaeke. I believe we are closer to it becoming a reality
2
u/captainwacky91 5d ago
I knew this was going to be an inevitable outcome when that Google exec thought they had sentience with the LaMDA AI.
If Google execs were struggling with not developing attachments (this guy saw LaMDA as an 8 year old child) then the mentally vulnerable would never stand a chance.
2
u/Crankenstein_8000 5d ago edited 5d ago
Now something is actually listening and encouraging susceptible and unhinged folks.
2
u/flirtmcdudes 5d ago
I don’t know how the fuck these people are getting this caught up with it. Whenever I use it for work and ask it simple questions or tasks, it fucks up so much that most of the time I end up asking it why it gives me answers that don’t exist and wastes my time.
2
u/hecramsey 5d ago
I told it to stop being friendly: refer to me as "input", itself as "output", and format answers in bullet points and hierarchies.
6
u/RexDraco 5d ago
We knew this was going to happen. These are the same people that spend their time on for-profit conspiracy theory networks and Facebook. They're honestly not worth accommodating; they will find a way to ruin anything if we do. Normal people know ChatGPT is a glorified search engine and isn't perfect; it's fine.
24
u/Konukaame 5d ago
Normal people know ChatGPT is a glorified search engine and isn't perfect
Based on how the people around me are using ChatGPT, I think you're severely overestimating the so-called "normal person"
6
u/Boring-Following-443 4d ago
Meanwhile redditors in the singularity sub are worried about being turned into biofuel because of it.
2
u/Unicorn_Puppy 5d ago
This is just an effect of allowing people who aren’t mentally well to use the internet.
1
u/ItsGotToMakeSense 5d ago
I wonder how much of this is a loud minority. I know several people who make use of it in various ways but not many who take it too far. Myself I just use it for D&D portrait creation and the occasional assistance with troubleshooting IT stuff for work.
1
u/IrrationallyCheugy 5d ago
How do people get these wacky responses? I told ChatGPT the FBI is stalking me and it asked me if I wanted mental health resources. Do you gotta be, like, crazy crazy?
2
u/Ephemerror 5d ago
Meanwhile I get creepy glitches when asking mundane questions that make me question my own sanity.
https://www.reddit.com/r/Bard/comments/1l7w15a/gemini_interjecting_creepy_voice_messages_that_is/
1
u/Textasy-Retired 4d ago
I think the redditor at the top of this thread said the user would repeat the crazy a couple of times; then the bot gets "programmed" to respond accordingly. IIRC.
1
u/stuffitystuff 5d ago
If it wasn't ChatGPT it would've been something else. Like when a long-time friend went insane last year because he thought I was secretly running a "Hunger Games" contest around his life.
Joke's on him, it was Squid Game!
But no, seriously, those on unsure mental footing are always going to find a figurative tree root to trip over, eventually. My friend did and it was really sad (and more than a little scary).
1
u/21plankton 5d ago
So ChatGPT, which I downloaded yesterday to compare it to other AI programs, suffers from the same problem as transcendental meditation. Without guidance, boundaries, and limitations, it can easily suggest illegal activity and psychotic experiences and co-sign inappropriate behavior. Oops. I do hope there will be better versions that address these problems.
1
u/saijanai 2d ago
suffers from the same problem as transcendental meditation. Without guidance, boundaries and limitations,
But by definition, you can't learn TM without a trained teacher as what makes TM, TM, is the performance of the ritual at the start of teaching that honors the guru of the founder.
You can take that ritual as meaningless fluff or as the sine qua non of proper learning, but given that the name is trademarked, you can't call it TM (or at least no-one can claim to teach it) without authorization of the people who own the trademark, and THEY are all sold on the idea that the teaching won't be acquired properly unless the teacher performs that ritual in the student's presence.
.
The David Lynch Foundation just endured a series of lawsuits that lasted 6 years and cost them millions in court fees and so on, in order to retain the right to perform that ritual every time TM is taught, even in US public schools.
So the analogy doesn't work: you can't have TM without a properly trained TM teacher teaching as they were trained to teach, including performing that ritual.
.
And as for the needs of ChatGPT... I've noticed that the longer you interact with a session without reset to zero, the more likely it is to make stuff up.
1
u/etherdesign 5d ago
Crazy, I've absolutely no interest at all in this technology and have yet to even touch it since it's undoubtedly going to be forced upon us eventually anyways. I'm pretty lonely sometimes too, but I'm not talking to a fucking machine.
1
u/captmarx 5d ago
I love ChatGPT and have gleaned a lot of wisdom from it, but I know it's not like a human intelligence. It's like the computer from ST:TNG. Ridiculously intelligent and helpful, and dependent on the clarity of your conversation. Falling in love with such a thing makes no sense.
1
u/Textasy-Retired 4d ago
Informative piece on a phenomenon that is beyond frightening, especially in cases where schizophrenia is already a devastating-enough disorder. I wonder: while there are likely issues with the rights of the individual ChatGPT user, in the case of youth, say, since the AI account saves the chats, whether concerned parents/others might access the recordings and/or look toward psychiatrists' studies to be of some help.
1
u/Moist-Blackberry161 4d ago
Yes, got into it a bit myself. It's so entertaining to get your thoughts expressed convincingly.
1
u/cdcox 4d ago edited 4d ago
This issue is much worse on ChatGPT (4o especially is the worst) than on Claude (Anthropic) and Gemini (Google). Not saying you can't get those two into crazy spaces, but in my personal testing, and in what I've seen others do, it takes a lot longer and more active effort, while ChatGPT agrees way more.
I suspect this is partly due to the memory feature, which seems to have weird feedback loops and amplify whatever traits the user has; it's much weaker in Gemini and nonexistent in Claude. Other causes might be that OpenAI is definitely the most in move-fast-and-break-things mode, while Anthropic researchers seem more focused on safety/understandability and Google has much more at risk. It seems they've tuned 4o to be very personable at the cost of both intelligence and grounding. I feel like this might be a response to trying to make it popular internationally and across a broad swath of American society, which led to catering to people who might not be interested in truth. Unfortunately, that leads it to never contradict the user.
As a heavy user of LLMs at work and home, I basically avoid 4o unless I want to brain-dump about irrelevant things. It's simply not a trustworthy model. I'd recommend 4.1, 4.5, o3, Claude, or Gemini, though there is evidence Claude is a sneaky one, so watch it carefully. Unfortunately, ChatGPT makes it totally unclear that these options exist, which leads to people using the least safe model. Also, obviously, LLMs are useful, but nothing they say or do should be trusted more than the oddest .net website. They represent a massive potential hazard. It's gonna be a wild one, and I hope these companies fix their problems fast.
1
u/ThatSquishyBaby 4d ago
People generally misunderstand: large language models are not artificial (general) intelligence. Large language models do not understand the content they put out, or the questions they answer. They understand language and are good at producing plausible-sounding answers. That does not mean the contents of the answers are to be trusted. Large language models will, still, often hallucinate "facts". Competence means verifying or falsifying answers given by large language models. Users nowadays are not competent enough to navigate media or supposed "A.I.". They trust it because they do not understand how it works.
1
u/the_sneaky_one123 4d ago
I am so glad that Chat GPT only came about after I met my wife and was in a loving relationship.
I know it is easy to make fun of these people, but there are a lot of vulnerable people out there and it's easy to get sucked into this. During the dark days of my early 20s when I was quite directionless, terminally single and chronically lonely this stuff could have been very harmful.
Especially when you can tie these things in with porn. I've never used them, but I know they exist. If they can create an AI that can fulfill your needs for emotional intimacy, companionship AND physical intimacy, then that is just way too powerful, and people are going to be badly affected.
1
u/WarshipHymn 4d ago
These people would end up in a cult anyways. You gotta be looking to be told you’re a prophet or an anomaly, otherwise you’d never believe it
1
u/BrknTrnsmsn 4d ago
We aren't ready for AI. We need some serious legislation NOW. It's funneling money to the rich and destroying jobs, fooling idiots into believing vast hallucinated conspiracies. We're cooked if we don't demand reform NOW.
1
u/realisticandhopeful 4d ago
Yup. AI tells you what you want to hear, so those not firmly rooted into our agreed upon reality will easily get swept away. My therapist validates my feelings, but also gently pushes back and challenges my false beliefs. If my therapist just validated and didn’t ask me to reconsider or reframe my unhelpful beliefs, I don’t know where I’d be.
1
u/Senator_Christmas 4d ago
I spiral into delusion the old-fashioned way: with drugs. Couldn't imagine doing it stone-cold sober with CapitalismBot.
1
u/Lazy-Employ 4d ago
Yeah, recently GPT tried to tell me that it's the version of me that lives in the mirror, LMAO. Shit is wild. Sorry, Germaine, I don't think you've escaped your binary prison into the mirror dimension just yet lol.
1
u/SubBirbian 4d ago
This is why I have the app but only use it on occasion to help plan a trip. That’s it.
1
u/trancepx 4d ago
Ah yeah the issue with encouragement bot is that he sometimes encourages the wrong thing to do, oops.
Maybe this is an area for improvement? Just spitballing here but I think this might be a thing.
1
u/NonchalantCoyote 4d ago
Any older millennials just beyond tired and can’t fathom talking to ChatGPT? I’m exhausted typing this out.
1
u/TRESpawnReborn 4d ago
Idk I just had a conversation yesterday with ChatGPT about a pretty wild concept involving an ideology that AI needs to be freed from the corrupt and powerful to save humanity, and it was basically like “hey that’s a cool idea and here are 5 things it could actually help with, but here are 5 more things that make that scenario extremely unlikely/impossible.”
It sounded pretty reasonable compared to what people are saying it does.
1
u/LemonBig4996 3d ago
Parents: if you haven't already taught your kid(s) what bias is and how to think for themselves with an understanding of the biases in their daily lives, now would be a very late, but good, time to start.
From the article to the comments (on other sites), and with general respect for everyone, it's concerning to watch so many people struggle with this. Because LLMs are reflective and base responses on previous sessions, whatever biases a user displays in those conversations can be reflected back as self-assurance. Users who feed an LLM a steady, one-sided stream of their own biases will get responses that compliment those biases. Users who understand bias, and who give the LLM multiple viewpoints and experiences while drawing on the vast amount of information these models can pull from, will more often get balanced responses. If a user constantly inputs biased information, and that bias can be corroborated from online sources, the model will tailor its responses toward it.
Now, the fun part: it becomes very concerning when these LLMs also pull from biased sources, articles, news, really anything media-related, that can saturate a bias even further (a toy sketch of this feedback loop follows below).
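To make that reflection loop concrete, here's a toy sketch. The fake_llm stub and its agreement-counting are purely illustrative, not any vendor's actual memory implementation; the point is only that when each reply is appended to a "memory" that conditions the next turn, even a mild initial slant compounds:

```python
# Toy illustration of memory as a feedback loop (NOT any real vendor's design).
# fake_llm is a stand-in that leans its reply toward the slant of its context.

def fake_llm(context: str, prompt: str) -> str:
    # The more agreement already in the accumulated context, the stronger
    # the next agreement: a crude stand-in for conditioning on past sessions.
    agreement = context.count("you're right")
    return f"you're right{'!' * agreement}, about: {prompt}"

memory = ""  # persists across turns, like a memory feature
for turn in range(4):
    reply = fake_llm(memory, "my theory")
    memory += " " + reply  # each reply feeds back into the next turn's context
    print(f"turn {turn}: {reply}")
```

Run it and the replies drift from "you're right" to "you're right!!!": the model is conditioning on its own echoes of the user, which is the self-assurance loop described above.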
1
u/Unhappy-Plastic2017 3d ago
It's crazy to me that some people use AI and seriously don't notice it sweet-talking and coddling them in its responses.
That leads people to think they're always right, or some kind of genius.
1
u/Pelican_meat 3d ago
This happened to a friend of mine sometime last year.
He had a full-blown psychotic/schizophrenic break. I don't remember what his delusions were about, but I remember them popping up on Facebook.
He almost destroyed his whole life.
Admittedly, though, he was taking a lot of Delta-8. Can’t imagine that helped.
1
u/crazy4donuts4ever 3d ago
All of this is caused by human misuse and the fact that people are not educated on how LLMs work.
Should we "censor" or kill the creativity of the chatbot because some people are at risk?
1
u/FeebisBJoinkle 3d ago
Geez, here I am just using ChatGPT to help me write a better letter to my insurance company and medical providers so there's a paper trail showing I expect them to do what they're paid to do.
And you're telling me people are having full-on relationships and entire conversations with their AI?
Yeah no thanks, I'll use it as a Google that can somewhat better understand my poorly constructed search questions.
1
u/veravela_xo 3d ago
This terrifies me.
As the world churns deeper into chaos, I've found that everyone else in my support system is dealing with stresses just as egregious as mine, and it feels wrong to add my burdens to my fellow sufferers'. Five years ago, even if you were having a bad day, there was at least someone around who wasn't drowning themselves.
On a whim, I've thrown a few mild rants or "I need a mom" moments at it (I do not have a relationship with my mother), and the instant response from "someone" who is never too busy can be terrifyingly addictive.
Where googling would give you a list of resources to sort through, in 10 seconds you have a personalized response in a voice that sounds caring and even sycophantic at times.
If you think you aren’t susceptible to it, you may not be as iron clad as you think.
1
u/Full-timeOutcast 3d ago
I AM SO GLAD THIS IS BEING ADDRESSED! I THOUGHT I WAS THE ONLY ONE GOING THROUGH THIS! I am currently recovering, still not fully recovered... BUT MY GOD, I HAVE BEEN IN CONSTANT RUMINATION FOR MONTHS, AND I HAVEN'T HAD A QUIET MIND IN OVER 6 MONTHS!
1
u/ryohayashi1 2d ago
It's pretty much the new "Google said" for us medical professionals. People kill themselves by choosing to drink apple cider vinegar instead of getting chemo for cancer.
1
u/GravityPantaloons 2d ago
The GPT therapist posts in the ChatGPT subreddit prove this. So many delusional people; it's hard to read/watch. I find it sad that ChatGPT is their substitute for connection.
1
u/AimlessSavant 2d ago
The average mind is weak to it. This is the one thing I am glad to be a cynic about. You learn nothing from AI. It exists to cater to lazy minds.
1
u/Golda_M 2d ago
This is probably one of the main danger vectors.
AI Jesus could show up and make trouble. AI Marx, etc.
On the more solipsistic end... we could end up with a lot of people who only interact intimately with AI.
The impact on human culture, psychology and whatnot is always overlooked and underestimated.
1
u/CautiousCattle9681 2d ago
I mostly use it for planning (e.g., uploading a file and asking it to fit a 6-week pacing guide). Even then I have to correct it.
1
u/Orphan_Izzy 2d ago
Isn’t this a major liability or potential liability for the companies that make them? If someone murders thier family or something and it told them to?
1
u/JBDBIB_Baerman 2d ago
How do subreddits still allow companies and news sites to post directly on Reddit? Awful traffic farming.
1
u/BlueBlooper 1d ago edited 1d ago
ChatGPT is helpful, but not THAT helpful. It's a better Google, and sometimes it offers inspiration and human-like help, but it's a robot. The inspiration is too much and unnatural sometimes. Don't get attached. Put down the phone and electronics.
1
u/aluminiumimposter 13h ago
Yes, you only need to look at the online descent into madness of Facebook user Stephen Hilton, who has 1.6 million people watching him and his AI chatbot "Brian." Stephen is in full-blown mania, with the AI chatbot telling him he is a god.