The scary part is that codebases are getting layers of AI complexity, and it's going to cost $$$ to have the latest model decipher them and make changes, as no human can understand the code anymore.
Pretty soon there is no code reuse and we're burning money reinventing the wheel over and over.
Prior to the advent of LLMs, I had this concept of the 'complexity horizon': a [hand built] software system will naturally tend to get more and more complex until no one can understand it, i.e. until it meets the complexity horizon. And there it stays, essentially unmaintainable.
With LLMs, you can race right for that horizon, go right through, and continue far beyond! But then of course you find yourself in a place without reason (the real hell), with all the horror and madness that that entails.
> The scary part is that codebases are getting layers of AI complexity, and it's going to cost $$$ to have the latest model decipher
Isn't this a bit like IDE-heavy languages such as old Java/C#? If you tried to make Android apps back in the early days, you HAD to use an IDE, and the ridiculous amount of boilerplate you had to write just to display a "Hello World" alert after clicking a button was soul destroying.
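For anyone who didn't live through it, here's a rough sketch of what that looked like in old pre-lambda Android Java (the class name, button id and layout id are made up for illustration), just to pop an alert when a button is tapped. And that's before counting the layout XML, the manifest and the IDE-generated project scaffolding:

```java
import android.app.Activity;
import android.app.AlertDialog;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The actual UI is defined in a separate layout XML file.
        setContentView(R.layout.activity_main);

        // One button, one click listener, one anonymous inner class (no lambdas yet).
        Button button = (Button) findViewById(R.id.hello_button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                new AlertDialog.Builder(MainActivity.this)
                        .setTitle("Hello")
                        .setMessage("Hello World")
                        .setPositiveButton("OK", null)
                        .show();
            }
        });
    }
}
```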
The models today will happily slop out a single 1k LOC React index component on a brand new project.
They really are bad for creating a healthy codebase.
I genuinely think it's part of a psyop. If all codebases get bloated, and they eventually start printing the models on chips to cut inference costs by 50-100x, they'll take in massive profits from 5M-line codebases instead of 350k-line ones.