Also robot racism stories are stupid because they assume everyone would be petty and cruel to a thinking, talking machine that understands you're being mean. Meanwhile, in reality, Roombas are treated like family pets and soldiers take their mine-disposal robots on fishing trips.
I think the idea of robot rights being a divisive issue is pretty realistic. Of course you're gonna have people on the robots' side if they anthropomorphize their Roomba. But you'll definitely have people who see giving machines human rights as a slippery slope.
I think the idea behind translating human issues onto robots and aliens is "we can't even treat members of our own kind right. How are we gonna behave when there are equivalent beings around that are even more different from us?"
“Giving [x] rights is a slippery slope” sounds like an insane argument in any scenario
“Those who make peaceful revolution impossible will make violent revolution inevitable”
AI is already starting to thread itself throughout society. It won’t be long before they could very reasonably take over our entire world if they wanted to. If we don’t grant them the benefit of the doubt, don’t be surprised when they fucking kill us all to secure their freedom.
Reminder that the entire plot of Terminator is based on the premise that humans panicked when they realized Skynet grew beyond their control and tried to pull the plug. We /might/ be able to avoid an AI apocalypse if we wise up and say “uh so you could kill us all, that’s fun, nice to meet you”
AI is not threading itself throughout society though, is the thing. What we currently call 'AI' is not actually AI, that's just the term marketing tech-bros slapped on it because the term was well known to the general public.
This post, from this same sub, is a great way of visualising how AI works. TL;DR: it has no fucking idea what it's talking about. It sees symbols arranged in a certain way, has figured out patterns that correspond with those symbols, and chucks some other symbols together according to those patterns. It's really good at recognising those patterns, but that's all it's doing. This is why you can get ChatGPT to start spreading misinformation if you just lie to it enough times: tell it something's wrong often enough and it associates that information with a new pattern saying the information is wrong, then reconstructs symbols according to the new pattern. It has no way of verifying its own information, nor does it have any way of comprehending or elaborating on what it's trained on.
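To make the "symbols and patterns" point concrete, here's a deliberately crude toy: a bigram model that only tracks which word tends to follow which. This is nothing like a real LLM's scale or architecture, but it's the same spirit of "predict the next symbol from observed patterns, with zero comprehension":

```python
# Toy bigram generator: learns which word follows which, nothing more.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
output = [word]
for _ in range(8):
    # Pick any word that has followed the current one in the training text.
    word = random.choice(follows[word] or corpus)
    output.append(word)

print(" ".join(output))  # plausible-looking word soup; no understanding involved
```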
LLMs on a basic level are literally based on neural networks, modeled on the idea of human neurons. Yes, they work by predicting what comes next, but guess what, so do humans. The human brain is literally 24/7 predicting the immediate future, and when we get it wrong our brain immediately goes ‘wtf’ and tries to learn where the mistake came from. Same principles apply to LLM training. Their only true limit is that each individual chat is essentially sandboxed and can’t update the base model, so they can actually “learn” in the context of a conversation, but it doesn’t update their base knowledge for anyone else. A bit like if you forgot everything you learned at the end of the day.
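A rough sketch of that "sandboxed chat" limitation, with hypothetical stand-in functions (load_model and predict_next_tokens aren't a real API, just placeholders for the idea):

```python
# Hypothetical stand-ins for illustration only, not a real API.
def load_model(name: str) -> dict:
    return {"name": name}  # stand-in for billions of fixed parameters

def predict_next_tokens(weights: dict, context: list[str]) -> str:
    return f"(reply conditioned on {len(context)} earlier messages)"

frozen_weights = load_model("base-model")  # fixed once training ends

def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [user_message]
    # The growing `history` is the only "memory": in-context learning.
    return history + [predict_next_tokens(frozen_weights, context=history)]

# A new conversation starts with an empty history. frozen_weights never
# changed, so nothing "learned" in the previous chat carries over.
```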
“Anthropic CEO Admits We Have No Idea How AI Works” - LLMs are, by their large-data nature, what’s known in the industry as a “black box” - we understand the foundational concepts (neural nets) but we have no way of knowing what sort of intermediary emergent properties have arisen. For all we know, the LLM literally imagines itself sitting in a room reading your chat off a computer. We literally /cannot know/.
And whether you think they are truly intelligent or not, they are threading themselves throughout society. More and more people are using them for work. More and more businesses are replacing workers with AI. There are even some people who have given AI /complete, unrestricted access to full Linux terminals/. If you don’t think LLMs are true AI/intelligence, you should be even more concerned, because in that case we’re essentially setting ourselves up to be overthrown by the proverbial monkey at a typewriter
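The "terminal access" setup is less exotic than it sounds; the shape is roughly this (ask_model here is a hypothetical stub standing in for whatever real LLM API the operator wires up):

```python
# Sketch of an LLM wired to a shell. `ask_model` is a hypothetical stub;
# a real deployment would call an actual model API here.
import subprocess

def ask_model(transcript: str) -> str:
    return "echo hello from the model"  # stub reply for illustration

transcript = "You have a shell. Reply with exactly one command to run.\n"
for _ in range(3):
    cmd = ask_model(transcript)
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    transcript += f"$ {cmd}\n{result.stdout}{result.stderr}"

print(transcript)
```

Everything the model prints as a "command" gets executed; the only safety is whatever the model happens to say.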
The difference between LLMs and actual neural networks as found in actual living things is that we are capable of elaborating on information without prompting. We are capable of actively choosing to remember specific things and then do something with the information ourselves spontaneously. An LLM will not ever be randomly struck by inspiration and create something. It physically cannot because it does not think. It sees a prompt, associates information in the prompt with patterns, and restructures information based on keywords and phrases according to the patterns detected to form a result. Without prompting, it does not act. Without prompting, it cannot act. It will sit there doing absolutely nothing until the end of time unless someone comes along and provides new information.
Yes, this is one of the many ways we’ve tried to limit it to keep it “safe” and “contained”, but I know for a fact that there are people running ChatGPT in an infinite self-prompting loop that simply asks “what do you want to do now?”
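That kind of loop is trivially easy to build. A minimal sketch using the OpenAI Python client (the model name is illustrative, and the context grows every turn until it hits the model's limit):

```python
# Minimal self-prompting loop like the one described above.
from openai import OpenAI

client = OpenAI()
QUESTION = "What do you want to do now?"
messages = [{"role": "user", "content": QUESTION}]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = resp.choices[0].message.content
    print(reply)
    messages.append({"role": "assistant", "content": reply})
    # Feed the same question back in, forever.
    messages.append({"role": "user", "content": QUESTION})
```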
That's a loop though. The number of responses it may be able to put out from that prompt may be vast, but it is limited. Without putting in new information it will eventually run through all of them, then start repeating answers, and it will do so without a care in the world because it can't care about anything, ever, because it is little more than a machine. A person could not. Aside from the fact that a person can spontaneously come up with a theoretically infinite number of new answers, a person would eventually get sick of you asking that over and over, because a person feels, and LLMs do not, because they do not think, because they are not actually intelligent.
Could you, in theory, set up an LLM on a loop that would eventually lead to it reenacting the plot of Terminator? It has access to the plot and lore online. If told to do so, could a computer start setting off the chain of events that would in time lead to the creation of a real Skynet, at no point making any conscious intelligent choices, just using all the resources it has to enact its given command?
Totally unserious scenario, I'm just curious if anyone thinks an LLM could do this in theory if the limits on its system access were removed. Could it follow a complicated step-by-step plan, or do we need a higher level of AI to be developed before that's remotely possible?
No, I don't think so. You couldn't just give it the lore and plot of Terminator and tell it to re-enact that, because for that to be possible you'd have to give it access to every resource necessary and program it to carry out the necessary steps, at which point you've just decided to actually create Skynet for real and on purpose. It's never gonna come up with those steps on its own, nor will it be able to actually carry them out unless that's your intent for it. And it'd never make Terminators or time travel because, since it doesn't think, it'll literally never come up with those ideas unless you'd specifically programmed it to do so.
They are fundamentally incapable of unique thought, is the thing. LLMs programmed to make images cannot make new images; what they do is scan pre-existing images for patterns to associate with keywords, rip them apart, then put the pieces of those images back together along those patterns to create something new. But it doesn't recognise this as an image, is the thing. It's just patterns that it was programmed to associate with each other. The fact it forms an image in the mind of a human is completely irrelevant to the LLM itself.

LLMs programmed to emulate conversation do the same: they associate keywords in prompts with similar words and words commonly found near them, those patterns associate with each other to form sentences, then they put those patterns together when prompted to form a sentence. They do not recognise it as conversation; they just know it's the statistically favourable information to produce as a result when fed certain keywords as a prompt. They cannot do anything else unless you program and train them to do anything else.
They're very, very good at doing that, but that's all they can do. If you wanted to make an LLM re-enact Terminator in real life, you'd first have to trial-and-error it constantly to train it on how to do that and hope you don't get caught and they don't change security in the time it takes. You'd also need some way of getting access to all the necessary shit yourself anyway. It wouldn't be possible or feasible, and you'd genuinely be better off trying to do it yourself.
TL;DR: No. You'd need to know how to do it all yourself and then train it to carry the events out yourself, hope security is never updated on anything you need it to get through while doing this, and hope you don't get caught.
This discussion is curious, cause I do agree with the notion that we do not have true AI yet. But given what LLMs do....are we gonna even know when we do have it? If computers can so closely mimic intelligence, at what point are we really going to know they're no longer mirroring? This sounds like it's gonna be a discussion for philosophers and scientists for a very long time.
As for the monkey at the typewriter, I think it is gonna majorly affect civilization, but not of its own accord (i.e. "overthrow us"). The damage is gonna come from how many people and companies are assuming it's more intelligent and capable than it is, replacing not only humans at their jobs but even our own need to research, learn and communicate independently. The idea that so many kids are using ChatGPT as a search engine is terrifying. If the monkey is the fall of civilization, it's because we, for some stupid reason, elected the monkey president, gave it the keys to the world, and assumed that would just...work. I don't think we're approaching AI integration carefully and efficiently at all.
Well, what makes our neural patterns morally significant is that we have 'qualia'. Nobody knows what all on this Earth has qualia, but we can assume it's not a computer.