Whenever I get worried about this I comb through our ticket tracker and see that ~0% of the tickets can be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively builds an understanding of the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form, and the value of SWEs lies in turning the bigger picture into a functioning product.
While true, my personal fear is that the higher-ups will overlook this fact and just assume that AI can do everything because of a few cherry-picked simple examples, leading to one of those situations where a bunch of people get fired for no reason and then rehired some time later.
Just keep in mind that there are many highly motivated people directly working on this problem.
It's hard to predict how quickly it will be solved and by whom, but this appears to be a software engineering problem solvable with effort, resources, and time, not a fundamental physical law that has to be circumvented the way a physical-sciences problem would. Betting that it won't be solved well enough to affect today's work relatively soon is betting against substantial resources and investment.
A lot of this can be provided or built up through better documentation in the codebase, or functional requirements that can likewise be created, reviewed, and then used as additional context. In our current codebase it's definitely an issue to get an AI "onboarded", but I've seen a lot less hand-holding needed in projects where the AI has been building from the beginning and leaving notes for itself to read later.
It's not binary. Jobs will be lost because management will expect fewer developers to accomplish more by leveraging AI.
Can you give an example to help us understand?
I look at my ticket tracker and see that basically 100% of it could be done by AI. Some of it would need assistance, because the business logic is more complex and less well factored than it should be, but AI is perfectly capable of doing most of the work with a well-defined prompt.
We're all slowly but surely lowering our standards as AI bombards us with low-quality slop. AI doesn't need to get better, we all just need to keep collectively lowering our expectations until they finally meet what AI can currently do, and then pink-slips away.
> Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying.
Um, you do realize that "the memory" is just a text file (or a bunch of interlinked text files) written in plain English? You can write these things out yourself. This is how you use AI effectively, by playing to its strengths and not expecting it to have a crystal ball.
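For what it's worth, here's a rough sketch of the kind of "memory" files I mean (the file names and contents are made up for illustration, not from any particular tool):

    notes/overview.md      - what the product does, who uses it, key constraints
    notes/architecture.md  - the services, how they talk to each other, known sharp edges
    notes/decisions.md     - dated "we chose X over Y because..." entries
    notes/next-session.md  - open threads to pick back up next time

Each one is a page or two of plain English that links to the others; the agent reads them at the start of a session, appends to them at the end, and you prune them the same way you'd prune any other documentation.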
Apparently you haven't seen ChatGPT Enterprise and Codex. I have bad news for you ...
"The steamroller is still many inches away. I'll make a plan once it actually starts crushing my toes."
You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.