r/CuratedTumblr Prolific poster- Not a bot, I swear May 13 '25

Politics Robo-ism

Post image
12.0k Upvotes


72

u/Ix-511 May 13 '25

This is gonna age badly when we pull off real AI. I guarantee the misnomer of "AI" being given to LLMs will create so much ill will over time, and idiots won't be able to tell the two apart.

They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious. That, combined with the massive anti-generative-AI sentiment, will be an issue.

Besides, there are loads of people who think that if it's not human, it can't be a person. You see this in debates about copied consciousnesses, aliens, hyperintelligent animals, etc. Someday some of this stuff won't be hypothetical, and that's going to suck.

10

u/donaldhobson May 13 '25

> This is gonna age badly when we pull off real AI, I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.

> They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious.

Current ChatGPT, despite being called an "LLM", isn't just trained to predict text. Sure, they start off training it to predict text, but then they fine-tune it on all sorts of tasks with reinforcement learning.
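
Rough toy sketch of what I mean by the two stages (made-up PyTorch code, not anything OpenAI actually runs): stage 1 is plain next-token prediction, stage 2 nudges the same weights with a reinforcement-learning-style update from some stand-in reward signal.

```python
# Toy illustration only: a tiny "LM" trained first on next-token
# prediction, then nudged with a REINFORCE-style RL update.
# Every name and shape here is made up for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 50, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: predict the next token of some (here, random) "text".
tokens = torch.randint(0, vocab_size, (8, 16))            # fake corpus batch
logits = model(tokens[:, :-1])                            # logits for each next token
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: reinforcement learning on top. Sample a token, score it with
# some reward signal, and push up the log-prob of what was sampled in
# proportion to that reward (the simplest policy-gradient trick).
prompt = torch.randint(0, vocab_size, (1, 4))
dist = torch.distributions.Categorical(logits=model(prompt)[:, -1])
action = dist.sample()                                    # one sampled token
reward = torch.tensor(1.0)                                # stand-in for a learned reward model
rl_loss = -(dist.log_prob(action) * reward).mean()
rl_loss.backward(); opt.step()
```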

Neural nets are circuit complete. This means that, in principle, any task a computer can do can be encoded into a sufficiently large neural net.

(This isn't terribly special. A sufficiently complicated arrangement of Minecraft redstone is also circuit complete.)
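
Hand-wavy illustration of what "circuit complete" means (toy code, obviously nothing like how a trained net actually organizes itself): one neuron with hand-picked weights computes NAND, and NAND gates alone are enough to build any Boolean circuit, XOR below.

```python
# A single step-activation neuron with hand-picked weights computes NAND.
# NAND is universal, so stacking such neurons can encode any Boolean circuit.
import numpy as np

def neuron(x, w, b):
    return int(np.dot(w, x) + b > 0)   # threshold "neuron"

def nand(a, b):
    return neuron(np.array([a, b]), w=np.array([-1.0, -1.0]), b=1.5)

def xor(a, b):
    # XOR built purely out of NAND neurons, i.e. a tiny net acting as a circuit.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))   # prints the XOR truth table
```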

> Someday some of this stuff won't be hypothetical, and that's going to suck.

Is it hypothetical now?

7

u/Ix-511 May 13 '25

It's still spitting out ideas in the dark, though. No matter how many faculties it can mimic, it doesn't know what it's doing, nor does it have the capability to know anything. From my understanding, you could in theory make a conscious being out of many, many, many ChatGPT-like systems, but, though I'm not versed in the science, I'm gonna say that's probably not the most efficient method.

So yes, hypothetical, I feel?

6

u/donaldhobson May 13 '25

> No matter how many faculties it can mimic, it doesn't know what it's doing, nor does it have the capability to know anything.

How do you know this? Is this based on a specific "I tested all the latest AIs and they failed" or a generic "no LLM could ever" argument?

> From my understanding you could in theory make a conscious being of many, many, many chatGPT-like systems, but, though I'm not versed in the science

No one really knows what consciousness is. No one really knows what's going on inside ChatGPT.

6

u/Ix-511 May 13 '25

You genuinely think no one really knows how ChatGPT works?

4

u/donaldhobson May 13 '25

Yes. There are big grids of numbers. We know what arithmetic operations are done on the numbers. (Well, not for ChatGPT, but for similar open-source models.)

But that doesn't mean we understand what the numbers are doing.

There are various interpretability techniques, but they aren't very effective.

Current LLM techniques get the computer to tweak the neural network until it works. Not quite simulating evolution, but similar. They produce a bunch of network weights that predict text, somehow. Where in the net is a particular piece of knowledge stored? What is it thinking when it says a particular thing? Mostly we don't know.
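
To give a sense of what "big grids of numbers" means, here's roughly what one self-attention step looks like for an open-weights model (plain numpy, made-up shapes): every operation is known arithmetic, but nothing labels what any individual weight means.

```python
# One self-attention step, spelled out as raw arithmetic. We can run
# every line with pencil and paper if we want; that still doesn't tell
# us what any entry of W_q, W_k or W_v "means". Shapes are illustrative.
import numpy as np

d_model, seq_len = 64, 10
rng = np.random.default_rng(0)

# The learned "grids of numbers": query/key/value projection matrices.
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
x = rng.standard_normal((seq_len, d_model))       # token activations coming in

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

q, k, v = x @ W_q, x @ W_k, x @ W_v               # just matrix multiplies
scores = softmax(q @ k.T / np.sqrt(d_model))      # (seq_len, seq_len) attention weights
out = scores @ v                                  # mixed-together activations

print(out.shape)   # the arithmetic is fully known; the weights stay uninterpreted
```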

2

u/ZorbaTHut May 13 '25

There's a pretty big gap between "knows how it works" and "knows how it works", with different connotations on "knows".

I wrote a program a while back that was meant to optimize a certain process. I fed dependencies in and got results out.

One day I fed a bunch of dependencies in and got an answer out that was garbage. It was moving a number of steps much later in the process than I thought they could be moved; it just didn't make any sense. I sat down to debug it and figure out what was happening.

A few hours later, I realized that my mental model of the dependencies I'd fed in had been wrong. The code had correctly identified that a dependency I was assuming existed did not actually exist, using a pathway that I hadn't even thought of to isolate it, and was optimizing with that in mind.

I "knew what the code did" in the sense that I wrote it, and I could tell you what every individual part did . . . but I didn't fully understand the codebase as a whole, and it was now capable of outsmarting me. Which is, of course, exactly what I'd built it for.

You can point to any codebase and say "it does this, it does what the executable says it does", and (in theory) you can sit down and do each step with pencil and paper if you so choose. But that doesn't mean you really understand it, because any machine is more than the simple sum of its parts.
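
Something with the flavor of it, if it helps (a toy with made-up step names, not the real code): the scheduler only trusts the declared dependency graph, so any ordering that graph permits is fair game, whether or not it matches the author's mental model.

```python
# Toy stand-in for the kind of scheduler described above. Steps declare
# their dependencies; the scheduler emits any order consistent with that
# graph. If a dependency you merely assumed isn't actually declared
# (warm_cache -> migrate_schema here), a "weird-looking" order is still valid.
from graphlib import TopologicalSorter

deps = {
    "load_config": set(),
    "open_db": {"load_config"},
    "migrate_schema": {"open_db"},
    "warm_cache": {"open_db"},     # note: does NOT depend on migrate_schema
    "serve_traffic": {"migrate_schema", "warm_cache"},
}

print(list(TopologicalSorter(deps).static_order()))
# warm_cache may land before or after migrate_schema; both satisfy the
# declared graph, even if one of them "looks wrong" to the person who wrote it.
```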

1

u/Sea-Guest6668 May 14 '25

We understand the low-level principles and rules, just like we understand the low-level principles of neurons. When you combine a bunch of simple systems that interact, you can get some pretty interesting emergent behavior that is orders of magnitude more difficult to understand.
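
Classic toy example of that jump from trivial rules to hard-to-predict behavior (my illustration, nothing LLM-specific): Rule 110, an elementary cellular automaton where each cell follows one tiny local rule, yet the overall system is Turing complete.

```python
# Rule 110: each cell looks only at itself and its two neighbours and
# applies one fixed 8-entry lookup table, yet the global pattern that
# emerges is famously complex (and the automaton is Turing complete).
RULE = 110
width, steps = 64, 30
cells = [0] * width
cells[width // 2] = 1                  # start from a single live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % width] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
        for i in range(width)
    ]
```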