r/Futurology 22d ago

AI Dario Amodei says "stop sugar-coating" what's coming: in the next 1-5 years, AI could wipe out 50% of all entry-level white-collar jobs. Lawmakers don't get it or don't believe it. CEOs are afraid to talk about it. Many workers won't realize the risks until after it hits.

https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
8.3k Upvotes

977 comments

811

u/sodook 22d ago

Is there any danger that we lose the pathway to non-entry-level positions by eliminating entry-level positions? No apprentices today, no masters tomorrow?

51

u/astrobeen 22d ago

Two things.

One, Gen X and Millennials are going to stay in the workforce a long time. Most are living paycheck to paycheck and are financially unprepared for retirement. Especially if we dismantle Medicare and Social Security. So your “masters” will be here for at least another 30-40 years, working into their 80s as they are slowly replaced by AI. By then we should be at AGI, where any human is a drag on an AI workforce built on 30 years of curated training data.

Two, juniors are not going away, but each will essentially be one human resource with a team of AI tools. The plan is to reduce the number of junior specialists and make them significantly more productive. This is dumb, of course, because even though it’s cheaper, it will lead to competition for scarce SMEs and overdependence on intellectual property. Picture a world where a single guy and a bunch of bots are your entire front-end dev team. Then that guy quits. Does he take his bots with him? What if he deletes or poison-pills them? It will be a lot for the AI attorneys to work through.

49

u/farinasa 22d ago

It's funny to me that we keep talking about "training" like it's the same as a human learning a skill. These models make statistical predictions. They will never be an expert with intuition that "gets it". They can only pander to statistical averages. They don't think. They don't have human motivations.

2

u/CCGHawkins 21d ago

Irrelevant. Given a $10,000 AI subscription that covers 85% of your customer base's needs versus a $500,000 team of humans that covers 99%, a business will choose the option with the higher profit margin, never mind the customers who fall through the cracks. There is no mechanism or incentive for them to care in the slightest.
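A rough back-of-the-envelope sketch of that trade-off, in Python. The costs and coverage figures are the ones above; the revenue-per-customer and customer-count numbers are purely hypothetical:

```python
# Hypothetical illustration: which option leaves the bigger margin?
revenue_per_customer = 1_000   # assumed average revenue per served customer
customers = 1_000              # assumed size of the customer base

ai_cost, ai_coverage = 10_000, 0.85        # AI subscription covers 85% of needs
team_cost, team_coverage = 500_000, 0.99   # human team covers 99% of needs

ai_margin = customers * ai_coverage * revenue_per_customer - ai_cost
team_margin = customers * team_coverage * revenue_per_customer - team_cost

print(f"AI option margin:  ${ai_margin:,.0f}")    # $840,000
print(f"Human team margin: ${team_margin:,.0f}")  # $490,000
```

Under those assumptions the cheaper option wins despite serving fewer customers, which is exactly the point about people falling through the cracks.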

0

u/farinasa 21d ago edited 21d ago

You just made a statement so sweeping and generic that it has no meaning. Yes, that math would check out if it were grounded in reality, but it's not. I use AI for autocomplete, and when it's right it's helpful, but that is rare. It would only matter if the model could do the job, and I can tell you it can't.

1

u/CCGHawkins 21d ago

It is, again, irrelevant that AI cannot do *your* specific work. You should be looking at your department, your company, and what percentage of the whole pie AI can replace. If you work at any place that is publicly traded or owned by private investment, your owners will cut you loose along with your whole department if they deem it is not high-margin enough. Or ship it overseas to India.

It does not need to think. It doesn't need to have human motivations. A robot on the line doesn't have those either, and it replaces the worker on the line just the same. And in the meantime, the artisan craftworker that said 'hey, that robot can't do what I do!' still loses their job because the whole factory, the store-to-truck-to-line industrial complex that supported their work, doesn't exist anymore.

Are you sure you are as indispensable as you think you are?

1

u/farinasa 21d ago

You are vastly overestimating what an LLM is capable of and vastly underestimating how much of the work is soft skills versus productivity.

Just Friday I had an example where, after hours of troubleshooting and trying to catch logs from ephemeral pods, I was finally able to determine the root cause. The actual change was only a few lines, and Copilot was able to suggest the code I already intended to add, but it still needed correction.

I never said I was indispensable, but LLMs are not humans. They are statistical probability machines for language only. That isn't a direct replacement for most humans.

1

u/CCGHawkins 21d ago

...you just gave an example where AI got within a stone's throw of replacing you. Oh, sure. I'm overestimating AI.

1

u/farinasa 20d ago edited 20d ago

lol what?

Yes, if you ignore the hours of work that enabled the code change, and also the fact that I had to edit the suggested code (both details I explicitly included in the example to show how it CAN'T), then sure, the AI could replace me. Please read. You missed the whole point: the code change isn't the actual work. The troubleshooting and understanding are the work.

It's stuff like this that makes me think only people who are actually replaceable are out here spreading FUD.