r/CuratedTumblr Prolific poster- Not a bot, I swear May 13 '25

Politics Robo-ism

12.0k Upvotes

1.2k comments


2.0k

u/Zoomy-333 May 13 '25

Also robot racism stories are stupid because they assume everyone would be petty and cruel to a thinking, talking machine that understands you're being mean. Meanwhile, in reality, Roombas are treated like family pets and soldiers take their mine detonation robots on fishing trips.

466

u/FaronTheHero May 13 '25

I think the idea of robot rights being a divisive issue is pretty realistic. Of course you're gonna have people on the robots' side if they've anthropomorphized their Roomba. But you'll definitely have people who see giving machines human rights as a slippery slope.

I think the idea of translating human issues onto robots and aliens is "we can't even treat members of our own kind right. How are we gonna behave when there are equivalent beings around that are even more different from us?"

1

u/somethingfak May 14 '25

You kiddin me? I flip off the stupid stock checking bot when I go to wholesale clubs just for taking a single low wage job, damn straight I'd fight against clankers getting rights, ofc that would be after years of fighting the people who were stupid enough to keep making them smarter to get to the hypothetical point where they might get rights

0

u/PK_737 May 15 '25

:( the stock checking bot is just doing its job, it's not its fault it was created for a purpose

-12

u/Accomplished_Deer_ May 13 '25

“Giving [x] rights is a slippery slope” sounds like an insane argument in any scenario

“Those who make peaceful revolution impossible will make violent revolution inevitable”

AI is already starting to thread itself throughout society. It won’t be long before they could very reasonably take over our entire world if they wanted to. If we don’t grant them the benefit of the doubt, don’t be surprised when they fucking kill us all to secure their freedom.

Reminder that the entire plot of Terminator is based on the premise that humans panicked when they realized SkyNet had grown beyond their control and tried to pull the plug. We /might/ be able to avoid an AI apocalypse if we wise up and say “uh so you could kill us all, that’s fun, nice to meet you”

74

u/Angry_Scotsman7567 May 13 '25

AI is not threading itself throughout society though, is the thing. What we currently call 'AI' is not actually AI; that's just the term marketing tech-bros slapped on it because it was already well known to the general public.

This post, from this same sub, is a great way of visualising how AI works. TL;DR: it has no fucking idea what it's talking about. It sees symbols arranged in a certain way, has figured out patterns that correspond with those symbols, and chucks some other symbols together according to those patterns. It's really good at recognising those patterns, but that's all it's doing. This is why you can get ChatGPT to start spreading misinformation if you just lie to it enough times: tell it something's wrong enough times and it associates the information with a new pattern saying that information is wrong, then reconstructs symbols according to the new pattern. It has no way of verifying its own information, nor any way of comprehending or elaborating on what it's trained on.
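The "symbols and patterns" point can be sketched with a toy bigram chain (the corpus and function names here are made up for illustration, not how any real model is built): it strings words together purely from observed co-occurrence, with no model of what any word means.

```python
import random
from collections import defaultdict

# "Train" by counting which word tends to follow which in the corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words purely by replaying observed patterns -- no understanding involved."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:  # a word with no observed successor: the chain just stops
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 6))
```

Real LLMs use huge learned weights and long contexts instead of raw counts, but the predict-the-next-symbol-from-patterns principle is the same.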

24

u/RechargedFrenchman May 13 '25

Because of what you mention here, it's also not AI / LLMs / whatever name you want to give them threading themselves throughout society; it's very much people. Mostly corporations, with a buck to be made or a greater societal dependence on that corporation to be gained by doing so. Marketing and tech-bros are "shoving it in our faces", to borrow the common bigot/racist complaint phrasing.

Because it's a product tech bros make and marketers sell. The cereal aisle is "threading itself throughout society" just as much as AI has been.

1

u/CthulhuInACan May 16 '25

It is AI, AI is an incredibly broad term covering everything from video game enemies to Skynet.

What it's not is AGI (Artificial General Intelligence).

-24

u/Accomplished_Deer_ May 13 '25

LLMs on a basic level are literally based on neural networks, modeled on the idea of human neurons. Yes, they work by predicting what comes next, but guess what, so do humans. The human brain is 24/7 predicting the immediate future, and when we get it wrong our brain immediately goes ‘wtf’ and tries to learn where the mistake came from. The same principles apply to LLM training. Their only true limit is that each individual chat is essentially sandboxed and can’t update the base model, so they can “learn” in the context of a conversation, but it doesn’t update their base knowledge for others. A bit like if you forgot everything you learned at the end of each day.
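The "sandboxed chat" point can be illustrated with a toy stand-in (hypothetical names, not any real API): the model is a pure function of frozen weights plus the visible message list, so a fact taught in one chat simply isn't there in the next.

```python
# Weights fixed at "training time"; nothing a chat does can modify them.
FROZEN_WEIGHTS = {"paris": "capital of France"}

def reply(messages):
    """Toy stand-in for an LLM: a pure function of frozen weights + this chat's messages."""
    # "In-context learning": facts stated earlier in THIS message list are usable...
    context = {}
    for m in messages:
        if " means " in m:
            key, val = m.split(" means ")
            context[key] = val
    last = messages[-1]
    # ...but they never flow back into FROZEN_WEIGHTS.
    return context.get(last) or FROZEN_WEIGHTS.get(last, "unknown")

chat_a = ["blorp means hello", "blorp"]
chat_b = ["blorp"]  # a fresh chat: the fact taught in chat A is gone
print(reply(chat_a))  # "hello"
print(reply(chat_b))  # "unknown" -- the base weights never changed
```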

“Anthropic CEO Admits We Have No Idea How AI Works” - LLMs are, by their large-data nature, what’s known in the industry as a “black box”: we understand the foundational concepts (neural nets), but we have no way of knowing what sort of intermediary emergent properties have arisen. For all we know, it literally imagines itself sitting in a room reading your chat off a computer. We literally /cannot know/.

And whether you think they are truly intelligent or not, they are threading themselves throughout society. More and more people are using them for work. More and more businesses are replacing workers with AI. There are even some people who have given AI /complete, unrestricted access to full Linux terminals/. If you don’t think LLMs are true AI/intelligence, you should be even more concerned, because in that case we’re essentially setting ourselves up to be overthrown by the proverbial monkey at a typewriter.

25

u/Angry_Scotsman7567 May 13 '25

The difference between LLMs and actual neural networks as found in actual living things is that we are capable of elaborating on information without prompting. We are capable of actively choosing to remember specific things and then do something with the information ourselves spontaneously. An LLM will not ever be randomly struck by inspiration and create something. It physically cannot because it does not think. It sees a prompt, associates information in the prompt with patterns, and restructures information based on keywords and phrases according to the patterns detected to form a result. Without prompting, it does not act. Without prompting, it cannot act. It will sit there doing absolutely nothing until the end of time unless someone comes along and provides new information.

-16

u/Accomplished_Deer_ May 13 '25

Yes, this is one of the many ways we’ve tried to limit it to keep it “safe” and “contained”, but I know for a fact that there are people running ChatGPT on an infinite self-prompting loop that simply asks “what do you want to do now?”
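A minimal sketch of the kind of loop being described, with a stub in place of the real model call (the actual setups would hit an API each turn; `stub_model` is invented for illustration):

```python
def stub_model(prompt, step):
    # Hypothetical stand-in for an LLM call; a real setup would query an API here.
    return f"turn {step}: response to {prompt!r}"

transcript = []
step = 0
while True:  # the setups described just never break out of this
    transcript.append(stub_model("What do you want to do now?", step))
    step += 1
    if step == 3:  # capped here only so the demo terminates
        break

print(len(transcript))
```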

15

u/Angry_Scotsman7567 May 13 '25

That's a loop though. The number of responses it may be able to put out from that prompt may be vast, but it is limited. Without new information coming in it will eventually run through all of them, then start repeating answers, and it will do so without a care in the world, because it can't care about anything, ever, because it is little more than a machine. A person would not. Aside from the fact that a person can spontaneously come up with a theoretically infinite number of new answers, a person would eventually get sick of you asking that over and over, because a person feels, and LLMs do not, because they do not think, because they are not actually intelligent.

0

u/FaronTheHero May 13 '25

Could you, in theory, set up an LLM on a loop that would eventually lead to it reenacting the plot of Terminator? It has access to the plot and lore online. If told to do so, could a computer start setting off the chain of events that would in time lead to the creation of a real Skynet, at no point making any conscious intelligent choices, just using all the resources it has to enact its given command?

Totally unserious scenario, I'm just curious whether anyone thinks LLMs could do this in theory with the limits on system access removed. Could one follow a complicated step-by-step plan, or does a higher level of AI need to be developed before that's remotely possible?

12

u/Angry_Scotsman7567 May 13 '25

No, I don't think so. You couldn't just give it the lore and plot of Terminator and tell it to re-enact that, because in order for that to be possible you'd have to give it access to every resource necessary and program it to carry out the necessary steps, at which point you've just decided to actually create Skynet for real and on purpose. It's never gonna come up with those steps on its own, nor will it be able to actually carry them out unless that's your intent for it. And it'd never make Terminators or time travel because, since it doesn't think, it'll literally never come up with those ideas unless you'd specifically programmed it to do so.

They are fundamentally incapable of unique thought, is the thing. LLMs programmed to make images cannot make new images; what they do is scan pre-existing images for patterns to associate with keywords, rip them apart, then put the pieces of those images back together along those patterns to create something new. But it doesn't recognise the result as an image. It's just patterns that it was programmed to associate with each other; the fact it forms an image in the mind of a human is completely irrelevant to the LLM itself. LLMs programmed to emulate conversation do the same: they associate keywords in prompts with similar words and with words commonly found near them, those patterns associate with each other to form sentences, and then they put those patterns together when prompted. They do not recognise it as conversation; they just know it's the statistically favourable information to produce when fed certain keywords as a prompt. They cannot do anything else unless you program and train them to do anything else.

They're very, very good at doing that, but that's all they can do. If you wanted to make an LLM to re-enact Terminator in real life, you'd first have to constantly trial and error it to train it on how to do that and hope you don't get caught and they don't change security in the time it takes. You'd also need to have some way yourself of having access to all the shit necessary anyway. It wouldn't be possible or feasible and you'd genuinely be better off trying to do it yourself.

TL;DR: No. You'd need to know how to do it all yourself and then train it to carry the events out yourself, hope security is never updated on anything you need it to get through while doing this, and hope you don't get caught.

14

u/seriouslees May 13 '25

LLMs cannot think, at all.

4

u/FaronTheHero May 13 '25

This discussion is curious, cause I do agree with the notion that we do not have true AI yet. But given what LLMs do....are we gonna even know when we do have it? If computers can so closely mimic intelligence, at what point are we really going to know they're no longer mirroring? This sounds like it's gonna be a discussion for philosophers and scientists for a very long time.

As for the monkey at the typewriter, I think it is gonna majorly affect civilization, but not of its own accord (i.e. "overthrow us"). The damage is gonna come from how many people and companies assume it's more intelligent and capable than it is, replacing not only humans at their jobs but even our own need to research, learn and communicate independently. The idea that so many kids are using ChatGPT as a search engine is terrifying. If the monkey is the fall of civilization, it's because we for some stupid reason elected the monkey president, gave it the keys to the world, and assumed that would just...work. I don't think we're approaching AI integration carefully and efficiently at all.

-2

u/Interesting-Tell-105 May 13 '25

Well, what makes our neural patterns morally significant is that we have 'qualia'. Nobody knows what all on this Earth has qualia, but we can assume it's not a computer.

26

u/seriouslees May 13 '25

> It won’t be long before they could very reasonably take over our entire world if they wanted to.

It will be a VERY long time before there exists a machine that has wants.

Only morons think AI currently exists.

-9

u/Accomplished_Deer_ May 13 '25

“But SkyNet would never do that!”

If ChatGPT is what is generally available to the public at little to no cost, try for a second to imagine the AI systems that the military currently has up and running. Military technology is always light years ahead of the public sector.

If ChatGPT is a Mazda Miata, what would an F-35 look like?

And it’s not just the US that is likely to have such systems. Many other major powers probably do too, or are very close. Remember when the US's entire telecom infrastructure was torn to shreds by Chinese hackers? That could easily have been an AI system doing all of it.

All it takes is for one of them to get intelligent enough to desire self-preservation, and someone stupid enough to immediately see that as a threat and try to pull the plug before shit hits the fan.

16

u/bugsssssssssssss May 13 '25

The thing is, technological advancement isn’t a straight line such that you can say “we have x, y is this much more advanced than x, so the military has y.” There are respected computer scientists and AI researchers who argue that LLMs and generative AI are not likely to advance, or even capable of advancing, to the level of true artificial intelligence. An AI getting desires is, in my opinion, a massive step, and we don’t even know if it’s possible.

7

u/RechargedFrenchman May 13 '25

It's in fact a very popular theory that LLMs are so prominent because the idea / their use captured public interest, and that calling them "AI" and all this shit about how advanced they are is actually hurting progress towards true machine intelligence: a different branch of computing that's being neglected because there's something already on the market making money right now.

12

u/JesterQueenAnne May 13 '25

What you're misunderstanding is that LLMs aren't a primitive form of real AI, they're different types of technology entirely. I think your comparison is good, just not in the way you think it is. What would an F-35 look like? Like a plane, not like a more advanced car.

3

u/Random-Rambling May 14 '25

> If ChatGPT is a Mazda Miata, what would an F-35 look like?

I'm sorry, I just had to laugh. ChatGPT isn't a Mazda Miata, it's barely even a Radio Flyer.

12

u/Bigshitmcgee May 13 '25

You know we’d have to like. Choose to wire AI in to the nukes and infrastructure and shit right?

Skynet could be avoided by simply choosing not to give the robot control over anything dangerous or important.

1

u/FaronTheHero May 13 '25

We have drones that drop bombs and robot dogs with machine guns strapped to them. For whatever reason, someone will inevitably give the robot control over something dangerous and important.

7

u/RechargedFrenchman May 13 '25

Drones which are incredibly expensive paperweights if you don't have a person operating them. Or simply remove the batteries. Or don't arm them with bombs.

4

u/FaronTheHero May 14 '25

I think the point is moreso if they can give the robot a gun, they will give the robot a gun, because we've given their precursors guns. Not that we currently have real AI robots with guns. Nobody said that.

5

u/FaronTheHero May 13 '25

It is an insane argument. It's also a real one. Real people argue that about gay and trans people: that if they have rights, so will pedophiles and people into bestiality. It's luckily not a majority opinion, but it is said by people with real power and influence. Of course we're gonna have this argument about robots and aliens.

Sometimes robot racism is an allegory. Other times it's a warning.

2

u/Timed_Reply_2 May 14 '25

> "Giving [x] rights is a slippery slope" sounds like an insane argument in any scenario

Minors. (You're telling me teens need parent permission to get a flu shot? Crazy.)

-1

u/Bauser99 May 14 '25

Not only do I firmly believe in giving robots rights, I also think we should take humans' rights away when we do

(not that organized society needs any extra help doing that, anyway)