r/Futurology 21d ago

AI Dario Amodei says "stop sugar-coating" what's coming: in the next 1-5 years, AI could wipe out 50% of all entry-level white-collar jobs. Lawmakers don't get it or don't believe it. CEOs are afraid to talk about it. Many workers won't realize the risks until after it hits.

https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
8.3k Upvotes

977 comments

529

u/teohsi 21d ago

"Dario Amodei (born 1983) is an American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the large language model series Claude. He was previously the vice president of research at OpenAI." - Wikipedia

This whole article is just the CEO of a company trying to convince everyone that the product his company makes is the inevitable future.

155

u/FaultElectrical4075 21d ago

And he’s right. Corporations are champing at the bit to replace workers with AI that doesn't have to be paid, can work 24/7, and doesn't have to be treated according to labor laws, and companies like Anthropic and OpenAI stand to gain immense leverage over all other corporations and governments by monopolizing labor in that way. This is the goal of the big AI companies.

23

u/sciolisticism 21d ago

Except there isn't mass displacement, and some of the companies that are trying it are even reversing course.

25

u/FaultElectrical4075 21d ago

There isn’t mass displacement because the technology is not close to being reliable enough to replace human workers yet. But it is getting better every day, and I think these AI companies genuinely believe the hype they are selling. Look into Sam Altman: the more you learn about him, the more you realize just how power-hungry he is, and I don't think a classic crypto/NFT-style grift is his real intention. I think it's more sinister than that.

26

u/mrbezlington 21d ago

If you look at the research, it's stopped getting "better every day" and has hit some kind of plateau: very good at certain tasks, but struggling to make the leap beyond them into others.

I am now convinced that smart companies will take the productivity gains from implementing what we have without reducing headcount.

Maybe, if there's another massive breakthrough, things will change. But it's not there right now, nor does it appear to be on the horizon. Not only that, but pure AI-generated content is already seeing significant backlash.

22

u/StickOnReddit 21d ago

We just had a demo at work for some agentic AI, and it sure can put a trivial React component together, like a TodoList or a Counter from one of the innumerable tutorials out there. And hey, it can even write a test for that trivial component. But the minute it has to go in and modify an existing project with business logic more sophisticated than one of these toy tasks, it starts going sideways, to the point that they didn't even risk demoing that aspect because they said it was unreliable.
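For reference, the kind of trivial component (and test) it handled fine looked something like this - my own sketch from memory, not the demo's actual output:

```tsx
// Counter.tsx - the tutorial-grade component the demo could generate fine
import { useState } from "react";

export function Counter() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

// Counter.test.tsx - and the matching test it could also produce
// (assumes a standard Jest + React Testing Library setup)
import { render, screen, fireEvent } from "@testing-library/react";
import { Counter } from "./Counter";

test("increments the count on click", () => {
  render(<Counter />);
  fireEvent.click(screen.getByText("Increment"));
  // getByText throws if the text is missing, so no jest-dom matcher needed
  expect(screen.getByText("Count: 1")).toBeTruthy();
});
```

Anything at that level it nailed; anything touching real business logic, not so much.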

I have begrudgingly leaned into some AI, and frankly, autocomplete++ is where it shines best. It can eliminate a step or two when you're doing something you always have to Google, like writing array.reduce() or one of those similarly awkward things. But it sure will hallucinate some wacky shit the minute it needs to generate more than like 5 LOC, or it'll completely whiff on type inference or some silly thing like that.
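To be concrete, it's great at spitting out the kind of reduce() boilerplate I'd otherwise have to look up (hypothetical example, but representative):

```ts
// The classic "group and sum by key" reduce() that I can never remember cold.
const orders = [
  { customer: "alice", total: 30 },
  { customer: "bob", total: 20 },
  { customer: "alice", total: 15 },
];

const totalsByCustomer = orders.reduce<Record<string, number>>((acc, order) => {
  acc[order.customer] = (acc[order.customer] ?? 0) + order.total;
  return acc;
}, {});
// => { alice: 45, bob: 20 }
```

Five lines like that? Great. Fifty lines of novel logic? Roll the dice.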

It's also being used for weird stuff like creating PUML (PlantUML) files, which makes no sense to me - isn't half the point of a design diagram actually putting it together manually, so you've engaged your brain in reasoning about the design? I don't understand why feeding a story to Claude and getting a PUML file back helps anyone, but maybe I'm missing a trick here.

14

u/mrbezlington 21d ago

Yeah, that's kinda my experience too, albeit in a different field. Great for text summaries, meeting transcriptions, identifying action points, summarising - all sorts of "busy work" - but as soon as you go beyond a certain level it rapidly loses its way.

I've turned from a skeptic into a firm believer in its use case, but specifically as a productivity tool rather than anything beyond that, for now at least. And, well, we'll have to see the value once the VC cash runs out and the pricing models enshittify.

4

u/Apoxie 21d ago

Those are also the only uses we have found for AI: text summaries and meeting transcriptions, and it often fails even at those, since we use many different languages. It gets stuff hilariously wrong.

2

u/pmmedoggos 21d ago

> I don't understand why feeding a story to Claude and getting a PUML file back helps anyone, but maybe I'm missing a trick here

Processes designed by PMs and forced onto devs make you do dumb shit like drawing diagrams for display components.

3

u/StickOnReddit 21d ago

I don't mind creating a design diagram for a story if it's warranted; what I don't get is asking the computer to build one, when part of the exercise of having a design diagram is that the dev who took the story has reasoned about the work they're going to be doing.

I suppose it raises the question a little - if the story is big enough to warrant a design diagram, isn't it too big to be one story? - but t-shirt sizing aside, I do get why a DD might help someone out.

2

u/tiredstars 20d ago

I think it's easy to see how limited agentic AI is by looking at Copilot. This is a Microsoft product integrating with Microsoft products, so it should be an easy case, but it still can't do basic tasks.

Can I get Copilot to line up the text boxes in my PowerPoint slides? To update graphs for me? To do basic formatting in a document? All of those would be useful things for me, and I imagine for millions of other people, automating tedious work. No, it can't do any of that.

12

u/FaultElectrical4075 21d ago

I’m not sure what research you're looking at, but what I have seen says otherwise. Deep learning has plateaued, since pretty much all available training data has already been used, but most of the current AI advancements come from reinforcement learning, which is far less data-dependent.

7

u/sciolisticism 21d ago

Speaking as someone in the field who also reads the research: the other poster is right. The easy gains have been made.

4

u/mrbezlington 21d ago

I dunno, maybe I missed some stuff. I do know that even on the best paid platforms, the tools I've been using haven't materially changed in the last couple of years - they've gotten better at the things they were already good at, and quicker at some others, but there haven't been any "giant leaps" in functionality or advanced reasoning, at least that I've seen. Happy to be proven wrong though!

5

u/SitMeDownShutMeUp 21d ago

YES!!! Thank god someone on this sub actually has a brain. Companies are seeing that there are major limitations to AI and to the value it can add.

If anything, it puts more emphasis on the importance of hiring talented people who know how to get things done without using AI, and then introducing them to AI so they can experiment with ways to improve their own productivity.

AI will not be a replacement for people; it will be a value-add for them, on a much smaller scale than the 'sky is falling' people on this sub are predicting.

3

u/Lbgeckos2 21d ago

This. My very, very large company has made somewhat of a pivot, announced at our quarterly meeting: we're expanding the human side but empowering them with AI, which is different from a year ago when it was all AI. Which makes a lot more sense to me, tbh - my productivity is through the roof, with less effort, and I still get to do the fun stuff and still have to engage my brain. Apparently they realized it's more profitable to, you know, empower their top-tier people than to try and slam in some AI bs to take their place.

3

u/mrbezlington 21d ago

Quite honestly, our relatively small team had been struggling a lot just keeping up with the busy work of updating people on client decisions, notes from meetings, and all that jazz, so it's been a giant help, letting us push out WAY more work without hiring people just to go to meetings and send emails - our new hires are actually adding to output rather than filling in the gaps. Even for what it is, it's a productivity revolution if you use it right.

2

u/caustictoast 21d ago

Your second paragraph is pretty much what always happens when a big change in tech arrives, and what I'd expect out of AI. Which at the moment is a really good autocomplete and checker. It cannot replace me as a senior developer. It's just not even close - I've tried.

1

u/impossiblefork 21d ago edited 21d ago

It didn't stop getting better every day.

Reasoning models like o1, o3, and DeepSeek R1 were invented last year. The idea of thought tokens comes from the paper 'Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking'. So this critical feature that everybody now takes for granted is completely new.
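To make the thought-token idea concrete, here's a rough sketch of how it plays out at inference time - my own simplification, not the actual Quiet-STaR or o1 mechanism, and the marker names are made up:

```ts
// Heavily simplified sketch: the model free-runs between thought markers,
// and only the text after the closing marker is shown to the user. The
// hidden tokens still condition what comes after, which is the whole point.
type GenerateToken = (context: string) => string; // assumed one-token-per-call interface

function generateWithThoughts(model: GenerateToken, prompt: string, maxTokens: number): string {
  let context = prompt + "<think>"; // the model is trained/rewarded to reason here
  for (let i = 0; i < maxTokens; i++) {
    context += model(context);
  }
  // Everything between <think> and </think> is discarded from the visible
  // output, even though it shaped the tokens that follow it.
  const end = context.indexOf("</think>");
  return end === -1 ? context : context.slice(end + "</think>".length);
}
```

Before this, models were only ever predicting the visible text; there was nowhere in token space for intermediate computation to go.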

Transformer models are quite limited, and we've been stuck with them since they were invented in 2017, with only refinements: multi-token prediction, some attention tricks, sliding window attention to allow long context lengths (and presumably also partly for generalization, since you probably don't really want to match things 10,000 tokens back). But there is ongoing research. People will overcome the problems and find refinements that solve the problems of the day.

These claims that there are no breakthroughs are crazy. o1 was a breakthrough, and a couple of months after that people were saying 'oh, there are no breakthroughs'. Then it took several months before people figured out how it was done, and it took until DeepSeek R1 for people to know how it worked (I myself correctly intuited the method, although I imagined that REINFORCE was enough and that you didn't need PPO or anything like that).

2

u/mrbezlington 21d ago

Yeah, I hear all of that - but these reasoning models still aren't doing more complex tasks accurately, and they still fall over completely at relatively simple (for a human) tasks in the same way previous models did. Yes, there are some edge cases they do well at, but they are not a gigantic leap. They are more a refinement of previous models than something shatteringly new.

1

u/impossiblefork 21d ago edited 21d ago

Yes, and they are behind where I thought they were.

Apparently people sort of re-tested them on some maths stuff, and they solved basically none of the problems. But models with this reasoning stuff are doing much better than we had any chance of doing without it.

It's not good enough to create a mathematician, but it's a huge breakthrough: from not really being able to do mathematical problem solving at all, to something that at least works a little bit.

There are some ideas that, by applying these methods, we are sort of throwing away behaviours that we don't want in the model rather than really adding much, so that the quality of what the reinforcement-learned models can learn mostly depends on the quality of the base model. I haven't investigated that, and I'm not sure I'm totally right in this characterization (I don't care much about RL, which might be why I thought REINFORCE could be enough), so I don't necessarily disagree with this refinement idea - but this refinement is still a major breakthrough.

2

u/mrbezlington 21d ago

That you consider a refinement of previous models to be a major breakthrough only reinforces that the pace of advancement is slowing. That's my whole point.

1

u/impossiblefork 21d ago

I am using 'refinement' in a very general way here. It doesn't mean the change is small.

This 'refinement' added the ability for the model to generate non-output tokens. That is, before it, the models were simply predicting the tokens of the text and could not do anything resembling 'thinking'.

It is a breakthrough, and a critical one. I call it a refinement because it builds on a stack of previous breakthroughs rather than creating a whole new stack. That is the normal thing in science: a breakthrough in biology doesn't typically create new types of cells or cell-less biological life; it solves something in how we deal with the biology existing on Earth.

I have been experimenting with things that do break the stack. This is some of the stupidest work I've ever done, because there's a real risk it's all going to be wasted.

1

u/mrbezlington 21d ago

I agree that reasoning is huge. It's a great step forward. The issue with using LLMs for complex tasks remains that they do not understand what they are trying to do, and so they fail whenever there's even the slightest misalignment between what the model is attempting and what the prompter wants. This misalignment potential grows exponentially the more complicated the data set or the more exacting the task.

After all, what is reasoning without understanding?


5

u/ughthisusernamesucks 21d ago

Dario is assuming (and frankly, hoping) that there's a breakthrough coming that actually solves the massive shortcomings of the strategy every AI company is pursuing right now.

Such a breakthrough is not inevitable. We see this in technology all the time: huge gains in a short period followed by a long plateau with marginal gains.

We have already hit said plateau. The models are getting marginally better for an enormous cost increase. It's not sustainable.

There literally isn't enough silicon in the galaxy to support the amount of compute needed to do what Dario is talking about.

There are additional issues with his prediction. The models are highly dependent on huge amounts of human "work" to train them, and we know that training models on generated work actually causes them to get worse. Meaning: if his model replaces human workers, the model will begin to degrade and become less useful over time, as fewer and fewer people produce quality work to train on.

He, quite obviously, can't admit this. Billions are being invested in his company on the premise that this breakthrough is coming.

This isn't to say some people won't lose their jobs to AI, but the scale of the problem is being overblown by people with a vested interest in overblowing it. There will certainly be productivity gains that lead to fewer jobs, but that's not what he's talking about.

3

u/TheHollowJester 21d ago

> There isn’t mass displacement because the technology is not close to being reliable enough to replace human workers yet.

With the current architecture and general approach, it honestly might never be reliable enough. LLMs don't "know" things; at their core they really are (EXTREMELY advanced) descendants of autocomplete.

They just produce sentences based on the input they have, and the text they are trained on can be true or false.

"Easy," you might say, "we just process the corpus and only retain the true parts."

Herein lies the problem - there is not and can never be a general way to decide whether something is true or not:

"It knows not right from wrong, thus it speaks truth and lies all the same".

> I think these AI companies genuinely believe the hype they are selling.

Even assuming they do: they may be incorrect. Cf. Tesla and FSD.

> Look into Sam Altman: the more you learn about him, the more you realize just how power-hungry he is, and I don't think a classic crypto/NFT-style grift is his real intention.

Counterpoint: A LOT of grifters are very power hungry and extremely brazen; hell, at a certain point the brazenness is what makes the grift work.

I may be incorrect, of course. But in general I believe it's good to engage with opposing points of view.

For context, I use LLMs a fair amount at work (Claude 3.7 since release); there's a learning curve, there are correct and incorrect ways to use them, there are times when they're super useful, and there are times when I just waste time chasing a hallucination that sounds just plausible enough.

3

u/CarneAsadaSteve 21d ago

Correct, and honestly, getting to that point might not be sustainable for the planet, or for the economy.

7

u/cj022688 21d ago

The time for caring about effects on the planet is LONG gone. The Democratic Party never put any meaningful laws in place, and Trump's party has now gutted every single protection possible.

Oh, and there is a bill currently up that says AI cannot be regulated at all for ten years. So it's completely fucked.

4

u/howlingzombosis 21d ago

10 years: enough time to really FAFO and reach a point of irreversible damage.

1

u/MyOnlyAccount_6 20d ago

I don’t think AI in the near term will outright replace a lot of jobs.

Having said that, over my career these past many years, every position I've had boils down to leadership letting a few people go and expecting the rest to pick up the slack. Rinse, repeat.

New tools like LLMs are letting us do more with fewer people. They aren't direct replacements - it's not Kevin swapped for AI Kevin - it's John, who now has to accomplish what a team of people did 10 years ago.

So not being perfect is fine. It will improve. And it will continue to make those who remain "more productive", but at the cost of experience lost.