I have seen more reactions to this tech than actual implementations that pushed the boundaries further. It is an amplifier of technical debt in a mostly naive user base (people experienced in bad patterns).
Take Anthropic, for example: they created MCP and Claude Code.
MCP gets the good parts of exposing an API surface right, but it also has the bad parts: it keeps implementations stuck and forces workarounds instead of pushing required changes upstream or safely forking an implementation.
Claude Code is orders of magnitude less efficient than plainly asking an LLM to work through an architecture implementation. The opaque black-box loops in Claude Code are mind-bending for anyone who wants to know how it did something.
And Anthropic/OpenAI seem to just rely on user momentum rather than innovate on these fundamentals, because it keeps token usage high, and as everyone knows by now, an unpredictable product is more addictive than a deterministic one.
We are currently in the "Script Monkey" phase of AI dev tools. We are automating the typing, but we haven't yet automated the design. The danger is that we’re building a generation of "copy-paste" architects who can’t see the debt they’re accruing until the system collapses under its own weight.
Almost like we are making devs dependent on the tool: not because of its capabilities, but because they never develop an understanding of the problem. Like an addiction dependency. We are all crack addicts burning more tokens for the next fix.