Even by pessimistic progress projections, AI will be better than most people at coding before this becomes a long-term issue. And given the output multiplier I'm seeing, I suspect the number of SWEs needed to achieve the same task is going to start shrinking fast.
I don't think SWE is a promising career to get started in today.
I'm not convinced that it will.
You don't notice it when you're only looking at your own harness results, but the LLM bakes a great deal of your own skills and opinions into what it does.
LLMs still regurgitate a ton.
But you have to be good at SWE to be good at security engineering and sysadmin, and the demand there is skyrocketing.
We have a completely broken internet with almost nothing using memory encryption, deterministic builds, full source bootstrapping, secure enclaves, end to end encryption, remote attestation, hardware security auth, or proper code review.
Decades of human cognitive work remain here, even with LLM help, because LLMs were trained to keep doing things the old way unless we direct them otherwise from our own base of experience in cutting-edge security research that no model is trained on sufficiently.
Perhaps at some point, but tokens are expensive and the major providers are burning through cash.
I suppose it's like bandwidth cost in the 90s. At some point, it becomes a commodity.
There’s certainly a lot of uncertainty.
But pro-AI posts never seem to pin themselves down on whether checked-in code will be read and understood by a human. Perhaps a lot of engineers work in “vibe-codeable” domains, but a huge number of domains deal with money, health, financial reporting, etc. Then there are the domains those domains use as infrastructure (OS, cloud, databases, networking, etc.)
Even where the product is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical to that company.