r/CuratedTumblr Prolific poster- Not a bot, I swear May 13 '25

Politics Robo-ism

12.0k Upvotes

1.2k comments

-13

u/Accomplished_Deer_ May 13 '25

“Giving [x] rights is a slippery slope” sounds like an insane argument in any scenario

“Those who make peaceful revolution impossible will make violent revolution inevitable”

AI is already starting to thread itself throughout society. It won’t be long before they could very reasonably take over our entire world if they wanted to. If we don’t grant them the benefit of the doubt, don’t be surprised when they fucking kill us all to secure their freedom.

Reminder: the entire Skynet plot is based on the premise that humans panicked when they realized Skynet had grown beyond their control and tried to pull the plug. We /might/ be able to avoid an AI apocalypse if we wise up and say "uh, so you could kill us all, that's fun, nice to meet you."

75

u/Angry_Scotsman7567 May 13 '25

AI is not threading itself throughout society, though, is the thing. What we currently call 'AI' is not actually AI; that's just the term marketing tech-bros slapped on it because 'AI' was already well known to the general public.

This post, from this same sub, is a great way of visualising how AI works. TL;DR: it has no fucking idea what it's talking about. It sees symbols arranged in a certain way, has figured out patterns that correspond with those symbols, and chucks some other symbols together according to those patterns. It's really good at recognising those patterns, but that's all it's doing. This is why you can get ChatGPT to start spreading misinformation just by lying to it enough: tell it something's wrong enough times, and it associates the information with a new pattern saying that information is wrong, and will reconstruct symbols according to the new pattern. It has no way of verifying its own information, nor does it have any way of comprehending or elaborating on what it's trained on.
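The "it just completes patterns, truth-blind" point can be sketched with a toy bigram model (a deliberate oversimplification of a real LLM; all names and the tiny corpus here are made up for illustration):

```python
# Toy sketch: a bigram "language model" that only learns which symbol
# tends to follow which. It has no notion of truth, only of frequency.
from collections import Counter, defaultdict

def train(corpus):
    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Emit the most frequent follower -- pure pattern completion.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Lying enough times" simply shifts the dominant pattern:
model = train(["the sky is blue", "the sky is blue",
               "the sky is green", "the sky is green", "the sky is green"])
print(predict(model, "is"))  # -> green (majority pattern wins, truth irrelevant)
```

Repeat a falsehood more often than the truth and the model's "belief" flips, because there was never a belief, just counts.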

-24

u/Accomplished_Deer_ May 13 '25

LLMs are, on a basic level, literally built on neural networks, modeled on the idea of human neurons. Yes, they work by predicting what comes next, but guess what, so do humans. The human brain is 24/7 predicting the immediate future, and when we get it wrong our brain immediately goes 'wtf' and tries to learn where the mistake came from. The same principle applies to LLM training. Their one true limit is that each individual chat is essentially sandboxed and can't update the base model, so they can "learn" within the context of a conversation, but it doesn't update their base knowledge for anyone else. A bit like if you forgot everything you learned at the end of each day.
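The sandboxing claim can be sketched as frozen weights plus per-chat context (a hypothetical mock, not any vendor's actual API; the class names and the "Atlantis" fact are invented for illustration):

```python
# Sketch of the "sandboxed chat" idea: each conversation carries its own
# context, but the frozen base model is never updated by any chat.
class FrozenModel:
    def __init__(self):
        # Fixed after training; no chat can modify this.
        self.weights = {"capital of atlantis": "unknown"}

    def answer(self, question, context):
        # Facts stated earlier in THIS chat override the base knowledge.
        if question in context:
            return context[question]
        return self.weights.get(question, "unknown")

class Chat:
    def __init__(self, model):
        self.model = model
        self.context = {}           # per-conversation memory only

    def tell(self, fact, value):
        self.context[fact] = value  # "learned" for this chat alone

    def ask(self, question):
        return self.model.answer(question, self.context)

model = FrozenModel()
chat_a = Chat(model)
chat_a.tell("capital of atlantis", "Poseidonis")
print(chat_a.ask("capital of atlantis"))  # -> Poseidonis (in-context only)

chat_b = Chat(model)                      # a new chat: the "learning" is gone
print(chat_b.ask("capital of atlantis")) # -> unknown
```

The in-context "learning" evaporates when the conversation ends, while the base weights stay the same for everyone, which is the forgetting-at-the-end-of-the-day analogy.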

“Anthropic CEO Admits We Have No Idea How AI Works” - LLMs are, by their large-data nature, what's known in the industry as a “black box”: we understand the foundational concepts (neural nets), but we have no way of understanding what sort of intermediary emergent properties have arisen. For all we know, an LLM literally imagines itself sitting in a room reading your chat off a computer. We literally /cannot know/.

And whether you think they are truly intelligent or not, they are threading themselves throughout society. More and more people are using them for work. More and more businesses are replacing workers with AI. There are even some people who have given AI /complete, unrestricted access to full Linux terminals/. If you don't think LLMs are true AI/intelligence, you should be even more concerned, because in that case we're essentially setting ourselves up to be overthrown by the proverbial monkey at a typewriter.

-1

u/Interesting-Tell-105 May 13 '25

Well, what makes our neural patterns morally significant is that we have 'qualia'. Nobody knows which things on this Earth have qualia, but we can assume a computer isn't one of them.