Hacker News

Why I'm not worried about AI job loss

67 points by ezekg today at 7:13 PM | 113 comments

Comments

gordonhart today at 8:00 PM

Whenever I get worried about this I comb through our ticket tracker and see that ~0% of the tickets could be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form, and the value of SWEs lies in turning the bigger picture into a functioning product.

ddtaylor today at 7:45 PM

Labor substitution is extremely difficult, and almost everybody hand-waves it away.

Take even the most unskilled labor people can think of, such as flipping burgers at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one, and they are constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with workable economics.

Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced, it needs to be performed by a robot that can be acquired for a reasonable capital expenditure, say $200,000, and that requires no maintenance, upkeep, or subscription fees.

This is a complete non-reality in the restaurant industry. Every piece of equipment they own costs them a significant amount up front plus ongoing maintenance, even the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money just to keep that equipment barely working.
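
To put rough numbers on it, here is a back-of-envelope payback sketch. The wage and robot price are the figures above; the maintenance cost is purely an assumption for illustration:

    # Back-of-envelope payback period for a burger-flipping robot.
    # wage and capex come from the figures above; maintenance is an
    # assumed illustration, not an industry number.
    wage = 50_000          # displaced labor cost, $/year
    capex = 200_000        # up-front robot price, $
    maintenance = 15_000   # assumed service/upkeep cost, $/year

    net_savings = wage - maintenance       # $/year actually saved
    payback_years = capex / net_savings
    print(f"Payback: {payback_years:.1f} years")  # -> Payback: 5.7 years

And that is before the robot ever breaks down or needs a technician visit.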

qgin today at 8:09 PM

You don't need AI to replace whole jobs 1:1 to have massive displacement.

If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.

nphardon today at 7:49 PM

In the semiconductor industry, we experienced brutal layoffs arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.

Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate, since they're writing themselves now and the limitations of human ability are no longer a bottleneck.

RevEng today at 9:30 PM

I was with the author on everything except one point: increasing automation will not leave us with such abundance that we never have to work again. We have heard that lie for over a century. The steam engine didn't do it, electricity didn't do it, computers didn't do it, the Internet didn't do it, and AI won't either. The truth is that as input costs drop, sale prices drop and demand increases - just like the paradox they referred to. However, it also tends to come with a major shift in wealth, since in the short term the owners of the machines produce more with less. As the technology becomes commonplace and prices adjust, the owners lose much of that advantage, but the workers never see that gain.

delegate today at 8:53 PM

Bottlenecks. Yes. Company structures these days are not compatible with efficient use of these new AI models.

Software engineers work on Jira tickets, created by product managers and several layers of middle managers.

But the power of recent models is not in working on the cogs; their true power is in working on the entire mechanism.

When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.

A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help with improving individual pieces of the puzzle.

The latest models have gotten really good at working on the entire puzzle - big picture and pieces.

This makes the human hierarchy a bottleneck, and ultimately obsolete.

The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

Of course, it's not just about the software, but about streams of information - customer support, bug tickets, testing, changing customer requirements... All of these can be handled by AI even today. And it will only get better.

This means different things depending on the angle you look at it from - yes, companies will become obsolete, but also, each employee can become a company.

nphardon today at 8:45 PM

Unfortunately, one of the struggles in old high tech (that's the only area I know - are you also experiencing this?) is that the C-level people don't look at AI and say: LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better. They think: therefore we can lay off 9 people.

trilogic today at 7:56 PM

You are not worried for one of two reasons:

1. You are not affected somehow (you have savings, connections, are not living paycheck to paycheck, and have food on the table).

2. You prefer not to engage with troubling, complex matters.

Time will tell - it is showing already.

cal_dent today at 9:30 PM

My view is that we spend a lot of time thinking about what AI can't do, when the wider problem is the short-to-medium-term redirection of capital to tech rather than labour.

AI might not replace current work, but it's already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work, if there's an option to reduce your biggest cost (labour), you'd very much give it a go first. We might see a resurgence of labour if it all turns out to be hype, but for the short to medium term there'll be a lot of disruption.

Think we’re already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment but I suspect (more so in tech focused industries) that will also be due to tech capex in place of headcount growth

ef2k today at 9:26 PM

The article frames its premise that "everything will be fine" around people with "regular jobs", which I assume means non-knowledge work, but most of the public concern is about cognitive tasks being automated.

It also argues that these models have existed for years and we have yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.

It would be better to prepare for the disruption than to take the sink-or-swim approach we're using now, in hopes that things will sort themselves out.

Nevermark today at 8:02 PM

> Bottlenecks rule everything around me

The self-setup here is too obvious.

This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines improving in ability and efficiency month after month will find they cannot do as well, or cannot do without.

It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.

I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.

looneysquash today at 9:01 PM

Ordinary people aren't even ok now.

Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.

AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.

ej88 today at 8:54 PM

I am somewhat worried in the short term about AI job displacement for a subsection of the population.

For me, the two main factors are:

1. Whether your company's priority is growing or saving

- Growing companies, especially those in steep competition, fight for talent, and AI productivity gains result in more hiring to outcompete

- Saving companies are happy to cut jobs to pad margins, thanks to their monopoly or pressure from investors

2. How "sequence-of-tasks-like" your job is

- SOTA models can easily automate long-running sequences of tasks with minimal oversight

- The more your job resembles this, the more in danger you are (AI diffusion into customer service is just starting, but I predict it will be one of the first areas to be heavily disrupted)

- I'm less worried about jobs where your job is a "role" that comes with accountability and requires you to think big-picture about which tasks to do in the first place

Flavius today at 7:36 PM

Maybe you should be a little worried. A healthy fear never killed anyone.

827a today at 8:31 PM

The take I'm increasingly coming around to is that Software Engineers should broadly be worried, because while there will always be demand for people who can create software products, whatever the tools may be, the skills necessary to do it well are changing rapidly. Most Software Engineers are going to wake up one day and realize their skills aren't just irrelevant, but actively detrimental, to delivering value out of software.

There will also be far fewer positions demanding these skills. Easy access to code generation has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies figure out the correct next thing to build.

SirMaster today at 9:31 PM

I don't worry about it because worrying about it just seems like a waste of time and an unproductive, negative way to think about things. Instead I spend my time and thought not in worry but in adapting to the changing landscape.

Davidzheng today at 7:55 PM

No, it's not a February 2020 moment, for sure. In February 2020, most people had heard of COVID and a few scattered outbreaks had happened, but people generally viewed the topic as a curiosity - major world news, but not necessarily something that would deeply impact them. This is more like the start of March 2020 in terms of general awareness.

simonw today at 7:39 PM

I read that essay on Twitter the other day and thought it was a mildly interesting expression of one end of the "AI is coming for our jobs" discourse, but a little slop-adjacent and not worth sharing further.

And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403

It appears to have really caught the zeitgeist.

RS-232 today at 8:07 PM

The advent of AI may shape up to be just like the automobile.

At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.

After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.

Then mandatory compliance will come. A government-issued license will be required to use it and to track its use. This license will be tied to your identity, and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.

Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command-and-control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4

sunaurus today at 8:26 PM

I’m not worried about job loss as a result of being replaced by AI, because if we get AI that is actually better than humans - which I imagine must be AGI - then I don’t see why that AI would be interested in working for humans.

I’m definitely worried about job loss as a result of the AI bubble bursting, though.

mjr00 today at 8:06 PM

I'm one of those developers who now writes probably ~80% of my code via Claude. For context, I have >15 years of experience and am ex-AWS, so I'm not a bright-eyed junior or a former product manager who now believes themselves a master developer.

I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change - that is to say, uninteresting editing of lines of code - as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do the interesting things.

You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their heads against the wall when things don't work, and Claude can't figure it out to help them.

Then there are the bad developers who are really fucked by Claude--the ones who truly don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason, and the dev spent a week trying to resolve it. Another was tasked with writing a simple database maintenance script and came back two weeks later (after constant prodding by teammates for status updates) with a Claude-written reimplementation of an ORM. That developer genuinely believed they were just one more day of churning through Claude tokens away from digging themselves out of that existential hole. If you can't think like a developer, these tools won't help you.

I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small in scope, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app"), Claude will ask for more details about what you're trying to do. It's the middle ground that's the problem: Claude tries to fill in the details even when there's something fundamentally wrong with what it's being asked to do.

As a concrete example, I was working on a new project and asked Claude to implement an RPC to update a database table. It did so swimmingly, but it also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were just meant to be a prototype, sure. But anyone with experience knows that random commits in the middle of business logic code are a recipe for disaster. The real issue, of course, was not having any consistent session management pattern. But a non-developer isn't going to recognize that that's the issue in the first place.
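
A minimal sketch of what I mean, assuming a SQLAlchemy-style API (the User model and function names here are hypothetical, not the actual project code):

    # Sketch only: contrasts an ad-hoc commit buried in business logic with
    # a single, consistent transaction boundary. Assumes SQLAlchemy 1.4+;
    # `User` is a hypothetical mapped model, not real project code.
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine("sqlite:///:memory:")
    Session = sessionmaker(engine)

    # Anti-pattern: the commit is hidden inside business logic, so this
    # update can't be composed with other changes in one transaction.
    def update_email_bad(session, user_id, new_email):
        user = session.get(User, user_id)
        user.email = new_email
        session.commit()  # transaction boundary buried here

    # Consistent pattern: business logic only mutates state; one outer
    # boundary decides when the transaction commits or rolls back.
    def update_email(session, user_id, new_email):
        user = session.get(User, user_id)
        user.email = new_email  # no commit: the caller owns the boundary

    def handle_rpc(user_id, new_email):
        with Session.begin() as session:  # commits on success, rolls back on error
            update_email(session, user_id, new_email)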

Or a sillier example from the same RPC: the gRPC API didn't include a database key identifying the row to update. A mistake on my part. So Claude's initial implementation of the update RPC scanned every row in the table and found the ones where the non-edited fields matched. Makes... sense, in a weird, roundabout way? But God help whoever ends up vibe coding something like that.
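
Reconstructed as a hypothetical sketch (not the actual generated code), the logic amounted to something like this:

    # Keyless update: with no primary key in the request, match rows on the
    # fields that weren't edited. Fragile: collisions and races galore.
    def update_status_without_key(rows, req):
        for row in rows:
            if row["name"] == req["name"] and row["team"] == req["team"]:
                row["status"] = req["status"]

    # What the RPC actually needed: carry the key so the row is addressable.
    def update_status(rows_by_id, req):
        rows_by_id[req["id"]]["status"] = req["status"]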

The type of AI fears are coming from things like this in the original article:

> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.

Which is great. But how many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, build only one app, and then spend years and many millions of dollars working on it. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app without breaking the others, as the codebase grows from those initial tens of thousands of lines to tens of millions.

There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.

RIMR today at 7:39 PM

> it’s been viewed about 100 million times and counting

That's a weird way of saying 80 million times.
