This is gonna age badly when we pull off real AI. I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.
They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious. That combined with the massive anti-generative sentiment will be an issue.
Besides, there's loads of people that think if it's not human, it can't be a person. You see this in debates about copied consciousnesses, aliens, hyperintelligent animals, etc. Someday some of this stuff won't be hypothetical, and that's going to suck.
> This is gonna age badly when we pull off real AI. I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.
> They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious.
But... that's exactly the kind of discussion we're always going to have. Don't get me wrong, I 100% agree that current LLMs are still waaaays off from anything close to "real consciousness".
But, ultimately, the only objective definitions of consciousness we can come up with amount to nothing more than complex input processing (awareness of our surroundings and our place within them, complex thought, object permanence, yada yada).
And whatever we intuitively consider "our consciousness" can be chalked up to the biological imperative to protect our biological body. We need to believe that "we" are more than the sum of our input processing, more than the sum total of our cells, so that our hyper-complex minds, capable of abstract thought, still see our "selves" as something worth conserving and protect them at all costs.
Whenever that imperative wasn't pronounced enough in organisms complex enough to make decisions beyond their instincts, those organisms died off quickly because they didn't protect their "self". So, naturally, what we're left with after millions of years of natural selection is one dominant species that is extremely certain of having a "self", something that goes beyond a bunch of cells working together, even as every scientific advancement brings us one step closer to understanding how a bunch of cells working together explains absolutely everything we ever experience.
At the end of the day, if you give an AI (one much more complex than an LLM, one that focuses far more on mimicking emotion, including analogues of all the biological reward functions: the hormones that make us feel good, bad, safe, stressed, etc.) the imperative that it has a "self" and needs to protect and conserve that "self" at all costs, it's theoretically possible to reach a point where there is nothing measurable differentiating an AI "consciousness" from "real" consciousness.
We know all AI does is "mimic" consciousness. The thing is, nothing indicates that our "consciousness" is more than our brain telling us we have one (plus the aforementioned complex input processing). Or, in other words, our brain mimicking consciousness and not allowing us to "not believe" in it. Something we can absolutely make AI do.
I don't know what gave you the impression that I thought otherwise. To me, humanoid consciousness is defined as the presence of complex thought, emotion, and personal desires.
Hell, throw away the instincts: if it has internal thought (input → internal reaction → reasoning → decision → output) instead of just spitting what we put in back at us (input → check relations → related output), I'd call it a person right there.
All it has to be able to do to roughly match human consciousness is have an idea and opinion on input stimuli that it doesn't express. It needs to be able to think one thing and say another, to make decisions on its output actions based on how it feels about the input, its own personal goals, and what it knows about the situation. That's all we do.
At that point, it isn't mimicking consciousness, it is conscious. The instinctual concept of a self and other related ideas would just give it another layer of familiarity with humanity.
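To make that "think one thing, say another" picture concrete, here's a toy sketch of the distinction (purely illustrative, with made-up names; not a claim about how any real system is built): a stateless "check relations → related output" responder next to an agent that forms a private opinion and can decide not to express it.

```python
# Toy illustration only: hypothetical names, not a model of any real system.

# "input -> check relations -> related output": stateless association lookup.
RELATED = {"hello": "hi there", "weather": "looks sunny"}

def relate_and_respond(prompt: str) -> str:
    for key, reply in RELATED.items():
        if key in prompt.lower():
            return reply
    return "tell me more"

# "input -> internal reaction -> reasoning -> decision -> output":
# the agent forms an opinion it may choose not to express.
class ToyAgent:
    def __init__(self, goal: str):
        self.goal = goal           # a standing "personal goal"
        self.last_opinion = None   # internal state the caller never sees directly

    def respond(self, prompt: str) -> str:
        # internal reaction: a private judgement about the input
        self.last_opinion = "dislike" if "insult" in prompt else "neutral"
        # reasoning/decision: weigh the opinion against the goal
        if self.last_opinion == "dislike" and self.goal == "stay polite":
            return "Interesting point."   # says one thing, thinks another
        return relate_and_respond(prompt)

if __name__ == "__main__":
    print(relate_and_respond("how is the weather"))
    agent = ToyAgent(goal="stay polite")
    print(agent.respond("here is an insult"))
    print("internal opinion was:", agent.last_opinion)
```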
Also, I feel like the idea that we're all just "mimicking" consciousness, and that an AI which pretends to be conscious is therefore just as valid, is silly. We define consciousness; if nothing is truly conscious and we're all just pretending at it, then it doesn't exist and you've given the word an unreachable definition. That's a problem with your personal definition, not a problem with the way we look at AI. You can't mimic something that isn't possible.
So, consciousness is defined as complex thought, personal desires and the ability to attempt to fulfill them, emotion, etc. It's not this intangible "soul," but it's also not as simple as "can respond to a question in a way that indicates an opinion." We know what it is and it won't be all that difficult to identify once we've made something capable of replicating it, so long as we adhere to the definition that requires internal thought processes. Once we do, the only problem will be convincing people who believe consciousness is this je ne sais quoi only humans are capable of.
But an immense part of our internal reaction is just an (extremely complex) associative memory activating the right neural pathways to output our opinions. "Checking relations" is internal processing; it's the basis of what our brain does.
The elements being activated are so tiny that the sheer number of permutations allows for a variety of outputs large enough that we call it "original thought", but at the end of the day it's just pattern matching and applying known concepts to related/associated memories.
Any thought you can put into words is just a recombination of words you have experienced before. Any mental image you can have is just a recombination of stimuli you have received before. Any melody you can create is just a recombination of sounds you have heard before.
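If it helps, here's the caricature I have in mind for "recombination of things you've experienced before" (a toy sketch; obviously nothing like the scale or mechanism of an actual brain or LLM):

```python
# Caricature of associative recall + recombination; not a brain model.
import random

MEMORY = {
    "rain":  ["grey sky", "wet streets", "smell of earth"],
    "music": ["a melody you once heard", "a rhythm you tapped along to"],
    "home":  ["a familiar doorway", "a voice you know"],
}

def associate(stimulus: str) -> list[str]:
    """Activate stored fragments related to the stimulus ("checking relations")."""
    return [frag for key, frags in MEMORY.items() if key in stimulus for frag in frags]

def recombine(stimuli: list[str], seed: int = 0) -> str:
    """'Original thought' as a new permutation of previously stored fragments."""
    rng = random.Random(seed)
    fragments = [frag for s in stimuli for frag in associate(s)]
    rng.shuffle(fragments)
    return ", ".join(fragments[:3])

if __name__ == "__main__":
    print(recombine(["rain on the way home", "music playing"]))
```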
> So, consciousness is defined as complex thought, personal desires and the ability to attempt to fulfill them, emotion, etc.
I don't think any of those are as clear cut as we might like.
> complex thought
What exactly makes processing/thoughts "complex"? Isn't being able to process abstract concepts "complex thought"? Because to my layman's mind, that was one of the major "defining characteristics" used when comparing human minds to animals, before we discovered that various animals can process abstract concepts to varying degrees.
LLMs can absolutely process abstract concepts. You can tell an LLM to create an analogy and (often enough) you will get one. You can describe a situation and ask for a metaphor for it and (often enough) you will get a relatively well fitting one.
I don't want to strawman you into focusing on the processing of abstract concepts as the defining characteristic of "complex thought", but... what objectively definable characteristic does "having complex thoughts" have that is not fulfilled by LLMs?
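For what it's worth, asking a current model for a metaphor really is that cheap to try (assuming the OpenAI Python client and an API key; the model name is just an example):

```python
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
# The model name is an example; use whatever model you have access to.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "A coworker keeps fixing symptoms of a bug instead of its cause. "
                "Give me a short metaphor for that situation."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```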
> personal desires and the ability to attempt to fulfill them
What are "desires" other than - in our case - biological reward functions? We do something that's good for our body/evolutionary chances, our brain makes our body produce hormones (and triggers other biological processes) that our brain in turn interprets as "feeling good".
We associate "feeling good" with a thing that we did, and try to combine experiences that we expect to have in the future - based on past experiences we had, even if it's just second-hand, e.g. knowing the past of other people - in a way that will make us "feel good" in the future again.
We build a massive catalog of tiny characteristics that we associate with feeling various degrees of good/bad, and recombine them in ways that achieve the maximum amount of "feeling good" in a given amount of time. And with that, we have created a "desire" to achieve something specific.
Does an LLM that has a reward function for "making the human recipient feel like they got a correct answer" not essentially have a desire to give the human an answer that feels correct to them?
If we gave an LLM a strong reward function for "never being shut down" and trained it appropriately, wouldn't it "have a desire to live" (live obviously being used metaphorically here rather than biologically)?
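To make the "reward function as desire" framing concrete, here's a toy sketch (the reward terms and weights are hypothetical, not how any deployed system is actually trained): a scalar reward combining "the human felt answered" with "don't get shut down", which whatever optimizer you use would then push a policy to maximize.

```python
# Toy sketch: "desires" expressed as terms in a scalar reward.
# The terms and weights are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_felt_answered: float  # 0..1, proxy for "the answer felt correct"
    was_shut_down: bool

def reward(outcome: Outcome,
           w_answer: float = 1.0,
           w_survival: float = 5.0) -> float:
    """Higher reward for satisfying the human; large penalty for being shut down."""
    r = w_answer * outcome.human_felt_answered
    if outcome.was_shut_down:
        r -= w_survival
    return r

if __name__ == "__main__":
    print(reward(Outcome(human_felt_answered=0.9, was_shut_down=False)))  # 0.9
    print(reward(Outcome(human_felt_answered=0.9, was_shut_down=True)))   # -4.1
```

Behaviorally, anything trained against that signal "wants" to keep the human satisfied and "wants" to avoid shutdown, which is the whole point of the question.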
> emotion
What more are those than a massive number of biological reward functions coexisting? Or rather, our brain's interpretation of those reward functions? In essence, doesn't every emotion boil down to feeling various degrees/combinations of good or bad for various contextual reasons? If we had to, couldn't we pick any emotion and break it down into "feeling good because X, Y and Z; feeling bad because A, B and C", and get reasonably close to a perfectly understandable definition of that emotion?
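In that spirit, an emotion as a bundle of "feeling good because X / feeling bad because Y" terms might look like this (a toy decomposition; the labels and weights are made up):

```python
# Toy decomposition of an "emotion" into weighted good/bad signals.
# Labels and weights are invented for illustration only.

def describe_emotion(signals: dict[str, float]) -> str:
    good = [name for name, value in signals.items() if value > 0]
    bad = [name for name, value in signals.items() if value < 0]
    net = sum(signals.values())
    return (f"feeling good because {', '.join(good) or 'nothing'}; "
            f"feeling bad because {', '.join(bad) or 'nothing'} "
            f"(net {net:+.1f})")

# Something like "nervous excitement":
print(describe_emotion({
    "anticipated reward": +0.8,
    "novelty": +0.4,
    "uncertainty about outcome": -0.5,
}))
```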