I would encourage my competitors to use AI agents on their codebase as much as possible. Make sure every new feature uses them, lots of velocity! Run those suckers day and night. Don't review the output, just make sure the feature is there! Then, when the music stops, the AI companies hit economic reality and go insolvent, and my competitors are left with no one who understands a sprawling, tangled web of code that is 80% AI-generated. Then we'll see who laughs last.
Both can be true at the same time: some teams spend a fortune on AI, and the AI investments don't get the expected ROI (bubble collapse). What is sure is that a lot of capacity has been built, and that capacity won't disappear.
What I could see happening in your scenario is that the company suffers from diminishing returns as every task becomes more expensive (new features, debugging sessions, library updates, refactoring, security audits, rollouts, infra cost). They could also end up with an incoherent, gigantic product that doesn't make sense to their customers.
Both pitfalls are avoidable, but they require focus and attention to detail. Things we still need humans for.
Qwen3 Coder Next and Qwen3.5-35B-A3B are already very good and can run on today's higher-end home computers at good speed. Tomorrow's machines will not be slower, and models keep getting more efficient. A good software engineer will still be valuable in tomorrow's world, but not as a software assembler.
> Don't review it, just make sure the feature is there!
Bad idea. Use another agent to do automatic review. (And a third agent to write the tests.)
Don't forget the architecting and orchestrating agent too!
> they are left with no one who understands a sprawling tangled web of code that is 80% [random people that I can't ask because they don't work here anymore and they didn't care to leave docs or comments] generated, then we'll see who laughs last.
Yes, this matches my experience with codebases before AI was a thing.