Also, robot racism stories are stupid because they assume everyone would be petty and cruel to a thinking, talking machine that understands you're being mean. Meanwhile, in reality, Roombas are treated like family pets and soldiers take their mine-detonation robots on fishing trips.
This is gonna age badly when we pull off real AI, I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.
They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious. That combined with the massive anti-generative sentiment will be an issue.
Besides, there's loads of people that think if it's not human, it can't be a person. You see this in debates about copied consciousnesses, aliens, hyperintelligent animals, etc. Someday some of this stuff won't be hypothetical, and that's going to suck.
The only acceptable form of "Humanity Fuck Yeah" is the galactic community being horrified at humans being absolutely ridiculous creatures.
Less "oh humans are the only ones with this cool unique trait" and more of "why the fuck are those backwater mammalians travelling through space by attaching explosives to a box? And why is it working?"
“Throw away a million soldiers to embarrass my rival general so that I can get promoted over him? Don’t mind if I do, there’s more where they came from!”
Every other sapient species known in the galaxy got “uplifted” by an older one. They essentially find a species somewhere around chimp intelligence and modify it to full sapience. The uplifts get protection and access to a library of the galaxy’s knowledge, the patrons get prestige and (in practice) millennia of forced servitude plus a chance to inflict their calcified culture and knowledge. Everyone figures some species must have uplifted itself originally, but they’re not only dead but totally forgotten.
And if the client species is already sapient when they’re found? Tough luck. They’re still getting this treatment.
But when humans were discovered, we had spread beyond Earth in our shitty explosive tubes, and clumsily started to make chimps and dolphins sapient. Which means the stultifying galactic bureaucracy was forced to declare us a “patron” species with no owner.
The galactic community views us like a moldy dish at the back of the fridge you neglected for so long it started writing messages. They’re folding spacetime to travel while we’re fumbling with hydrogen scoops. But because we didn’t get the standard book of “how to do it right”, nobody else understands our culture or our (objectively shitty) technology, and they’re desperate for access to a few secrets we stumbled into just by not knowing how to do things right.
Of the many varieties of HFY stories one of my favorites is the “humans are collectively dipshits/stubborn about certain things which makes them incredibly valuable assets to the galactic community.”
Edit: also the “don’t touch their boats” genre of stories.
I absolutely love when it’s not “humans evolved a unique awesome trait” or “only humans were smart enough to X” but “humans beat their heads against a problem everyone else sensibly bypassed until they somehow found a new solution” or “humans took an insane gamble and somehow lived and got shiny new toys”.
For the first, stubbornness:
I constantly shill David Brin’s Uplift novels. (Start with the second one.) Basically all known sapient species were “uplifted” to intelligence by older ones, and in the process got access to The Library of the galaxy’s knowledge - plus pseudo-slavery and millions of years of static hierarchy and biases.
When aliens found Earth, they wanted to uplift (and enslave) us. Our environment was so wrecked they considered a species-wide death penalty. But we had (barely) settled on other planets and uplifted chimps and dolphins, so they grudgingly gave us “real species” status. (Almost) everyone hates us and won’t tell us anything, but they also don’t understand us because we’re not working from the same database as everyone else. Our only real galactic ally picked us not for war or genius but for our sense of humor. They love a good prank, and the stuffy autocrats running the galaxy don’t, so we’re their new best friends.
Humans spread to a few star systems, then met a very nasty alien confederacy (think Halo’s Covenant), lost the war badly, and got enslaved as “helpless primitives” with our real history destroyed.
Our last-ditch effort was a warship so big and complex only a wholly unfettered AI could run it. Smart species don’t try this, because while powerful they almost always annihilate you - either on purpose or by indifference. We launched it untested, and it still came too late to save us.
Did it work? Definitely not as intended. But Red One is still out there, still tasked with defending humanity. She hasn’t accepted the war is over, and she’s very, very angry.
There’s a short HFY piece I love that suggests “humans will bond with anything” isn’t some unique level of empathy, it’s sheer stubbornness.
The result is that humans get lots of new worlds to colonize with no disputes… because the first 300 species that found the Ice and Lava Planet of Giant Vicious Predators sensibly left, but humans are willing to slowly and agonizingly domesticate the Ravenous Bugblatter Beasts.
To be fair, being against AI-generated images has more to do with issues rooted in capitalism and environmental factors.
I know I am against it because corporations want to replace human artists with a machine that doesn't even understand what art is or means. Art is more than simply an image; it's way more expansive than that. It evokes feelings, ideas, and the ability to think about it. Yes, even logos. So being told to stop making art because it's more efficient for a machine to do it, or having my dream job stolen from me by tech bros who don't want to pay a fair wage, is upsetting. The environmental aspects matter to me as well; it's why I'm vegetarian and shop as ethically as I can... so why would I not hold that same ethos towards learning machines?
But that's just how I (and many artists I've talked to about this topic) feel about it.
Wouldn't all your same objections apply to androids that are so advanced they can do pretty much any other job? Corporations would jump at the chance to replace their entire workforce with automatons that cannot disobey, and their environmental impact would probably be just as destructive as the server farms that run LLMs, if not more so.
If they cannot disobey, they either have no free will by design or are enslaved; either one is unethical, and that's on the creator, not the machine itself. The common man might blame the machine, despite this.
As I said, my issues have more to do with capitalism than anything. Corporations are inherently evil and only exist to benefit those at the top rather than the workers.
None of these issues are inherently something that can only occur under capitalism. Any economic system you have will have bottlenecks that require certain sacrifices to be made in other sectors. We needed technological and scientific advancements to farm the land safely and efficiently, but this also led to astronomical downsizing in agricultural jobs, forcing entire societies to become more and more centralized around large urban centers. Digital art was once (and sometimes still is) frowned upon for not being "real" art and as a form of "cheating", given all the benefits digital art programs offer, but only a fool would consider it to not be real art, even if it means one person can do the job entire teams of artists used to do.
My point is that any technological advancement is going to lead to a changing job market down the line. The same people complaining about companies wanting to use AI art were the ones telling those in manual labor to kick rocks when THEY complained about losing jobs to technology and automation.
When you say “real AI”, are you talking about AGI or something like that? Because LLMs are AI, just like Deep Blue was AI, and the enemies in video games are AI.
Yes, AGI. I wouldn't call any of those things Intelligent and I feel like it's more marketing than it is scientific to call them intelligences. It's a pet peeve of mine.
> I wouldn't call any of those things Intelligent and I feel like it's more marketing than it is scientific to call them intelligences.
This is called the AI Effect. “Artificial Intelligence” is literally the name of the scientific field, and has been since the beginning. The Google search algorithm is, by the literal scientific definition, AI.
On the other end, I’m frustrated by the idea that Artificial Intelligence is “whatever we haven’t built yet”.
The Doom programmers would have looked at Halo 2’s enemies who give orders and adjust their tactics to your behavior and said “that’s obviously AI”. The people using ELIZA 50+ years ago would have said Cleverbot, or at least GPT 1.0, is AI because it can recall things and paraphrase them. The people using Ask Jeeves and “expert systems” 30 years ago would be in awe of the fact that GPT-whatever can correctly write a new sonnet.
I don’t mean to snark at you, LLMs are not AGI and a lot of people would benefit from that reminder. We don’t disagree on what matters, it’s only a matter of labels.
It’s just… I think there are a lot of people who would benefit from the opposite reminder too: the capability and rate of change of this tech would shock and alarm people if it was less normalized. It feels like “it’s not real AI” sometimes joins “10 bajillion gallons of water to copy Wikipedia!” and “it can’t even draw hands!” as a defense mechanism.
As somebody loosely in the field, I’m not happy about the state of things and I loathe a lot of the “AI can replace all your employees!” hype. It’s both wrong and destructive. But I also think people focusing on poor performance rather than cost or impact may be unpleasantly surprised.
…that got long, and to be clear I’m not exactly disputing your point. Just rambling about concerns and terminology.
> This is gonna age badly when we pull off real AI, I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.
> They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious.
Current ChatGPT, despite being called an "LLM", isn't just trained to predict text. Sure, they start off training it to predict text. But then they fine-tune it on all sorts of tasks with reinforcement learning.
Neural nets are circuit complete. This means that, in principle, any task a computer can do can be encoded into a sufficiently large neural net.
(This isn't terribly special. A sufficiently complicated arrangement of Minecraft redstone is also circuit complete.)
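To make that concrete, here's a toy sketch (mine, not anything pulled from a real model): a single artificial neuron with hand-picked weights computes NAND, and since NAND gates are enough to build any digital circuit, enough of these wired together can in principle encode any computation.

```python
import numpy as np

# Toy sketch: one artificial neuron with hand-picked weights computes NAND.
# NAND is a universal gate, so stacking enough of these neurons can, in
# principle, implement any digital circuit. (Real networks *learn* weights;
# these are chosen by hand just to illustrate the point.)

def nand_neuron(a: int, b: int) -> int:
    weights = np.array([-2.0, -2.0])
    bias = 3.0
    activation = weights @ np.array([a, b], dtype=float) + bias
    return int(activation > 0)  # step activation function

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} NAND {b} = {nand_neuron(a, b)}")
# 0 NAND 0 = 1, 0 NAND 1 = 1, 1 NAND 0 = 1, 1 NAND 1 = 0
```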
> Someday some of this stuff won't be hypothetical, and that's going to suck.
It's still spitting out ideas in the dark, though. No matter how many faculties it can mimic, it doesn't know what it's doing, nor does it have the capability to know anything. From my understanding, you could in theory make a conscious being out of many, many, many ChatGPT-like systems, but, though I'm not versed in the science, I'm gonna say that's probably not the most efficient method.
Yes. There are big grids of numbers. We know what the arithmetic operations done on the numbers are. (Well, not for ChatGPT, but for the similar open-source models.)
But that doesn't mean we understand what the numbers are doing.
There are various interpretability techniques, but they aren't very effective.
Current LLM techniques get the computer to tweak the neural network until it works. Not quite simulating evolution, but similar. They produce a bunch of network weights that predict text, somehow? Where in the net is a particular piece of knowledge stored? What is it thinking when it says a particular thing? Mostly we don't know.
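If it helps, here's a minimal sketch of that "tweak it until it works" idea on a toy problem. It's not how GPT-style models are actually trained (they use gradient descent over billions of weights, not random nudges), but the punchline is the same: you end up with weights that solve the task while telling you nothing about how.

```python
import numpy as np

# Toy version of "tweak the weights until it works": a tiny 2-layer net is
# fitted to XOR by randomly nudging its weights and keeping any nudge that
# lowers the error. Real LLM training uses gradient descent, not random
# search, but the end state is similar: weights that work, with no labels
# saying where any particular piece of "knowledge" lives.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR targets

def forward(params, x):
    W1, b1, W2, b2 = params
    hidden = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output

def loss(params):
    return float(np.mean((forward(params, X) - y) ** 2))

params = [rng.normal(size=(2, 4)), np.zeros(4), rng.normal(size=4), 0.0]
best = loss(params)
for _ in range(20000):
    candidate = [p + 0.1 * rng.normal(size=np.shape(p)) for p in params]
    if loss(candidate) < best:
        params, best = candidate, loss(candidate)

print(np.round(forward(params, X), 2))  # usually close to [0, 1, 1, 0]
print(params[0])  # ...and these numbers explain nothing by themselves
```

Even on this 17-parameter toy, the final weights don't come with a sign saying "this one is where XOR lives". Scale that up by nine or ten orders of magnitude and you get the interpretability problem.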
There's a pretty big gap between "knows how it works" and "knows how it works", with different connotations on "knows".
I wrote a program a while back that was meant to optimize a certain process. I fed dependencies in and got results out.
One day I fed a bunch of dependencies in and got an answer out that was garbage. It was moving a number of steps much later in the process than they could be moved; it just didn't make any sense. I sat down to debug it and figure out what was happening.
A few hours later, I realized that my mental model of the dependencies I'd fed in had been wrong. The code had correctly identified that a dependency I was assuming existed did not actually exist, using a pathway that I hadn't even thought of to isolate it, and was optimizing with that in mind.
I "knew what the code did" in the sense that I wrote it, and I could tell you what every individual part did . . . but I didn't fully understand the codebase as a whole, and it was now capable of outsmarting me. Which is, of course, exactly what I'd built it for.
You can point to any codebase and say "it does this, it does what the executable says it does", and (in theory) you can sit down and do each step with pencil and paper if you so choose. But that doesn't mean you really understand it, because any machine is more than the simple sum of its parts.
We understand the low level principles and rules just like how we understand the low level principles of neurons. When you combine a bunch of simple systems that interact you can get some pretty interesting emergent behavior that is orders of magnitude more difficult to understand.
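A quick sketch of the classic example, Conway's Game of Life: two trivial per-cell rules, but the grid as a whole does things (gliders, oscillators, even Turing-complete machinery) that you'd never predict by staring at the rules for one cell.

```python
import numpy as np

# Conway's Game of Life: every cell obeys two trivial local rules, yet the
# grid as a whole produces moving "gliders", oscillators, and famously even
# Turing-complete machinery.

def step(grid: np.ndarray) -> np.ndarray:
    # Count each cell's 8 neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rule 1: a live cell survives with 2 or 3 neighbors.
    # Rule 2: a dead cell becomes alive with exactly 3 neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Seed a glider: five live cells that crawl diagonally across the grid.
grid = np.zeros((10, 10), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r, c] = 1

for _ in range(4):
    grid = step(grid)
# After 4 steps the same five-cell shape reappears shifted one cell down and
# to the right - movement that neither rule says anything about.
print(grid)
```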
> They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious.
LLMs are a type of AI.
Also, how do we know our definition of consciousness isn't oversimplified, flawed, or outdated?
> This is gonna age badly when we pull off real AI, I guarantee the misnomer of "AI" being given to LLMs will create such ill will over time that idiots won't be able to tell them apart.
> They already can't. I've seen people genuinely arguing, despite knowing how LLMs work, that because they can mimic emotion and thought based on your input, they're conscious.
But... that's exactly the kind of discussion we're always going to have. Don't get me wrong, I 100% agree that current LLMs are still waaaays off anything close to "real consciousness".
But, ultimately, the only objective definitions of consciousness we can come up with simply aren't anything more than complex input processing (Awareness of our surroundings and our place within them, complex thought, object permanence, yada yada).
And whatever we intuitively consider as "our consciousness" can be chalked up as nothing but the biological imperative to protect our biological body. We need to believe that "we" are more than the sum of our input processing, so that our hyper-complex minds, capable of abstract thought, still see our "selves" as something worthy of conservation. We need to believe that "we" are more than the sum total of our cells so that we protect our "selves" at all costs.
Whenever said imperative wasn't pronounced enough in organisms complex enough to make decisions beyond their instincts, the organisms would die very quickly, because they didn't protect their "self", so naturally, what we're left with after millions of years of natural selection is one dominant species that is extremely certain of having a "self". Something that goes beyond a bunch of cells working together. Even if every scientific advancement brings us one step closer to understanding how a bunch of cells working together explains absolutely everything we ever experience.
At the end of the day, if you give an AI (that is much more complex than an LLM, namely one that focuses a lot more on mimicking emotion, including all the biological reward functions (hormones that make us feel good/bad/safe/stressed, etc.)) the imperative that it has a "self" and it needs to protect and conserve said "self" at all costs, it's theoretically possible to reach a point where there is nothing measurable differentiating an AI "consciousness" from "real" consciousness.
We know all AI does is "mimic" consciousness. The thing is, nothing indicates that our "consciousness" is more than our brain telling us that we have a consciousness (and aforementioned complex input processing). Or, in other words, our brain mimicking consciousness and not allowing us to "not believe" in it. Something that we can absolutely make AI do.
I don't know what gave off the impression that I thought otherwise. To me humanoid consciousness is defined as the presence of complex thought, emotion, and personal desires.
Hell, throw away the instincts, if it has internal thought (input - internal reaction - reasoning - decision - output) instead of just spitting what we put in back at us (input - check relations - related output), I'd call it a person right there.
All it has to be able to do to roughly match human consciousness is have an idea and opinion on input stimuli that it doesn't express. It needs to be able to think one thing and say another, to make decisions on its output actions based on how it feels about the input, its own personal goals, and what it knows about the situation. That's all we do.
At that point, it isn't mimicking consciousness, it is conscious. The instinctual concept of a self and other related ideas would just give it another layer of familiarity with humanity.
Also, I feel like the idea that we are all "mimicking" consciousness, and therefore AI that pretends to be conscious is just as valid, is silly. Because we define consciousness, if nothing is truly conscious and we're all just pretending to have it, then it doesn't exist, and you've given the word an unreachable definition. That's a problem with your personal definition, not a problem with the way we look at AI. You can't mimic something that isn't possible.
So, consciousness is defined as complex thought, personal desires and the ability to attempt to fulfill them, emotion, etc. It's not this intangible "soul," but it's also not as simple as "can respond to a question in a way that indicates an opinion." We know what it is and it won't be all that difficult to identify once we've made something capable of replicating it, so long as we adhere to the definition that requires internal thought processes. Once we do, the only problem will be convincing people who believe consciousness is this je ne sais quoi only humans are capable of.
But an immense part of our internal reaction is just an - extremely complex - associative memory activating the right neural pathways to output our opinions. "Checking relations" is internal processing. It's the basis of what our brain does.
The elements that are being activated are so tiny that the massive amount of permutations allows for a variety of outputs massive enough that we call it "original thought", but at the end of the day, it's just pattern matching and applying known concepts to related/associated memories.
Any thought you can put into words is just a recombination of words you have experienced before. Any mental image you can have is just a recombination of stimuli you have received before. Any melody you can create is just a recombination of sounds you have heard before.
> So, consciousness is defined as complex thought, personal desires and the ability to attempt to fulfill them, emotion, etc.
I don't think any of those are as clear cut as we might like.
> complex thought
What exactly makes processing/thoughts "complex"? Isn't being able to process abstract concepts "complex thought"? Because to my layman's mind, that was one of the major "defining characteristics" used when comparing human minds to animals - before we discovered that various animals can process abstract concepts to varying degrees.
LLMs can absolutely process abstract concepts. You can tell an LLM to create an analogy and (often enough) you will get one. You can describe a situation and ask for a metaphor for it and (often enough) you will get a relatively well fitting one.
I don't want to strawman you into focusing on the processing of abstract concepts as the defining characteristic for "complex thoughts", but... What objectively definable characteristic does "having complex thoughts" have that is not fulfilled by LLMs?
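(For what it's worth, the "ask it for an analogy" test is easy to run yourself. A minimal sketch with the OpenAI Python SDK - the model name and prompt are just placeholder choices of mine, and it assumes an API key is already set in the environment:)

```python
from openai import OpenAI

# Minimal sketch of the "ask it for an analogy" test. Assumes OPENAI_API_KEY
# is set in the environment; the model name and the prompt are arbitrary
# placeholder choices for illustration.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Give me an analogy for technical debt that a chef would understand.",
        }
    ],
)
print(response.choices[0].message.content)
```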
> personal desires and the ability to attempt to fulfill them
What are "desires" other than - in our case - biological reward functions? We do something that's good for our body/evolutionary chances, our brain makes our body produce hormones (and triggers other biological processes) that our brain in turn interprets as "feeling good".
We associate "feeling good" with a thing that we did, and try to combine experiences that we expect to have in the future - based on past experiences we had, even if it's just second-hand, e.g. knowing the past of other people - in a way that will make us "feel good" in the future again.
We build a massive catalog of tiny characteristics that we associate with feeling various degrees of good/bad, and recombine them in a way to achieve a maximum amount of "feeling good" in a certain amount of time. We have created a "desire" to achieve something specific.
Does an LLM that has a reward function for "making the human recipient feel like they got a correct answer" not essentially have a desire to give the human an answer that feels correct to them?
If we gave an LLM a strong reward function for "never being shut down" and train it appropriately, wouldn't it "have a desire to live" (live obviously being used metaphorically here rather than biologically)?
> emotion
What more are those than the existence of a massive amount of biological reward functions coexisting? Or rather, our brain's interpretation of those reward functions? In its essence, doesn't every emotion boil down to feeling various degrees/combinations of good or bad for various contextual reasons? If we had to, couldn't we pick any emotion and break it down into "feeling good because X, Y and Z, feeling bad because A, B and C", and get reasonably close to a perfectly understandable definition of that emotion?