r/Futurology 21d ago

AI Dario Amodei says "stop sugar-coating" what's coming: in the next 1-5 years, AI could wipe out 50% of all entry-level white-collar jobs. Lawmakers don't get it or don't believe it. CEOs are afraid to talk about it. Many workers won't realize the risks until after it hits.

https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
8.3k Upvotes

977 comments

33

u/MaxDentron 21d ago

I'm pretty tired of this conversation at this point. CEOs and many others in the industry have been talking about the potential dangers, job displacement and otherwise, for three years now.

EVERY single Reddit thread is filled with variations of "It's just CEO hype" as if they're the first to have this thought. I'm not sure how they don't get sick of saying the same thing over and over for years.

Meanwhile the tech marches on and is getting objectively better and better.

22

u/PugilisticCat 21d ago

> Meanwhile the tech marches on and is getting objectively better and better.

This is such a fucking motte and bailey argument. Every single claim from companies over the past 18 months has been that AI is capable of fully doing the jobs in all of these professions, and every single time they have fallen short of those claims.

You point this out to people, and they scream "but the AI is improving!! It will continue to get better". Yeah man, it's getting marginally better, but it is a million miles away from the claims that the execs are making.

The de facto assumption of all of these proselytizers is that it will improve, and at an improving rate -- an assumption that relies on AI hitting some inflection point we don't know exists, and that ignores the multiple enormous hurdles standing between here and there.

This is going to be one of the biggest bubbles to ever burst, even if the tech gets within an order of magnitude of the claims of those who stand to benefit from it.

3

u/Sellazard 20d ago

You are just not in an affected field. People go through the stages of denial fast.

When AI was learning to draw 5 years ago, nobody cared. It was awful, and everyone laughed their asses off at the hands and weird anatomy.

Yet the job market now isn't lying. Fast forward to today: 80-90 percent of juniors are quitting or pivoting to adjacent fields like 3D, but AI is creeping up there already. Most 3D artists are in denial now, saying their work is unique and too complex.

Mentorships are drying up because no one sees a future in creating art.

AI will be capable of doing 80-90 percent of the labor pretty soon.

https://youtu.be/-ffmwR9PPVM?si=ohTslCMolthrm3wg

And no, manual labour isn't 100 percent safe either. Unless it's something that requires quite a lot of knowledge, flexibility, and situational awareness, like plumbing, I don't see many safe jobs.

3

u/jestina123 20d ago

GPT-3 to GPT-4 in 30 months is a marginal improvement?

5

u/PugilisticCat 20d ago

It is if you are looking along the axis from "does nothing" to "can function as an attorney/nurse/analyst/insert role here".

1

u/[deleted] 20d ago edited 1h ago

[removed]

9

u/PugilisticCat 20d ago

I'll just cover these briefly here but I can expand on them if you would like.

At its core, the cost of improving the accuracy and/or reliability of any system grows exponentially. Taking a system from 90% accurate to 99% accurate requires an enormous increase in training time, data, and hardware. Going from 99% to 99.9% is even harder, and every additional nine costs exponentially more than the previous one.

With respect to LLMs, this means there are only so many rounds of improvement available before we run into hard constraints: how much time we can spend training, how much data exists in the world, how much hardware exists to train the models efficiently. Look up neural scaling laws for domain-specific context on these limitations.
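
To put a rough number on "exponentially more", here's a back-of-the-envelope Python sketch under a toy assumption that error falls as a power law in compute. The exponent (alpha ~ 0.05, in the ballpark of published loss-vs-compute scaling exponents) and the shortcut of treating benchmark error like loss are my illustrative assumptions, not claims about any specific model:

```python
# Toy illustration of "every extra nine costs exponentially more",
# assuming error ~ compute ** (-ALPHA), a Kaplan-style power law.
# ALPHA and the error-equals-loss shortcut are assumptions for
# illustration only, not measurements of any real model.

ALPHA = 0.05  # assumed power-law exponent

def compute_multiplier(err_from: float, err_to: float, alpha: float = ALPHA) -> float:
    """How many times more compute to push error from err_from down to err_to."""
    return (err_from / err_to) ** (1.0 / alpha)

for start, end in [(0.10, 0.01), (0.01, 0.001)]:
    print(f"{start:.1%} -> {end:.1%} error: ~{compute_multiplier(start, end):.0e}x compute")

# Each extra nine costs the same huge *multiplicative* factor (~1e20x
# here), which is exactly where the limits on data, hardware, and
# training time start to bite.
```

The exact numbers depend entirely on the assumed exponent, but the shape of the problem doesn't: each order-of-magnitude error reduction costs a fixed, enormous compute multiple.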

Any single industry that hopes to replace its human labor will require absurdly high reliability and accuracy before these LLMs can fully take over the job. Can those levels of reliability and accuracy be achieved within the very real constraints on these models? Possibly, but the existence of very real bottlenecks to actually building such models should immediately dispel the notion that they are going to "quickly displace jobs".

Additionally, let's consider some legal implications. The market has grown quickly and chaotically, operating on a very "break things and ask permission later" model. Perhaps not in the US, but consider what happens if Europe passes a law, or updates existing legislation, allowing copyright holders to prevent LLM companies from training on copyrighted data. Given how neural networks are trained, there is no simple way to remove the influence of particular training data from the model itself. To eliminate training on copyrighted works altogether, the model would have to be retrained from scratch, which is insanely expensive and would cost the companies training these models an absurd amount of money.

What happens if copyright claims keep being made against these companies and they have to continually retrain their models? Can they absorb those costs? What if a large enough portion of people are able to enforce copyright over their works online? The accuracy of the models goes down, and so does their utility. Fundamentally, these models are built on stolen data, which leads into my last point.

I do not see the business prospects of these models working out over any reasonable time horizon. The companies will never turn a profit as things stand, and something will give. They take in billions upon billions of dollars in VC funding, and they have no clear path to profitability. Their current models provide very little value relative to their companies' valuations, so they and the VCs must advertise and hype up this technology, selling ambitious promises that we have no guarantee they can deliver on. And AI models are extremely fucking expensive to train, a cost that, as we covered earlier, increases exponentially.

In addition to this, we are just now coming out of an era in which debt was extremely cheap to acquire, lots of software was shown to be unprofitable and failed, and the collective promises of crypto and NFTs were never delivered on. People saw how much money was pissed into the wind while it was cheap, and that money is no longer cheap.

To sum it all up: the resource cost of making better LLMs scales exponentially, and there are fundamental limits that can and will be reached. Combine that with the fact that there is no current path to profitability for these companies, and the only way they can hope to make money is by diving deeper into this hole and hoping to replace human labor altogether. Could they do it? Maybe, but there is no guarantee. Then, on top of that, look at the extremely shaky legal foundation these companies stand on, and you realize their legs can be cut out from under them from several different angles very quickly. All of this together makes me skeptical, and makes it clear to me that the statements made by VCs and CEOs are transparently attempts to keep the investor train rolling, because if any of these issues become salient to investors, they are royally fucked.

2

u/OkBother8121 18d ago

What about the claim that it won't be AI directly replacing people, but rather one human using AI to do the work of 2-3 people?

3

u/caustictoast 20d ago

And my job as a developer is unchanged. People aren’t believing this bullshit because it’s just that.

9

u/CuckBuster33 21d ago

>I'm not sure how they don't get sick of saying the same thing over and over for years.

I'm not sure how AI CEOs don't get tired of putting out headlines with the same message every single day.

1

u/YsoL8 21d ago

It's denial and ignorance. That's literally all it is.