> the end result is the same but the method is very different.
I don't think anyone really cares about LLM code when the end result is exactly the same as the hand-written version. The problem is that in reality the LLM version is almost never the same as the hand-written version; it's orders of magnitude worse.
> it's orders of magnitude worse
This is not my experience *at all*. Models from 18+ months ago may have produced really bad code, but in general most coding agents are amazing at finding existing code and replicating the current patterns. My job as the operator is then to direct the coding agent to improve whatever it doesn't do well.
In the limited cases where I've used it, it's been alright / good enough. But it has had lots of examples (of my own) to work from.
But a lot of people don't think like this, and we must come to the unavoidable conclusion that for them the LLM code is better than what they are used to, be it their own code or their colleagues'.
So far, I haven't seen any comparison of AI-generated code (using the best available models) and hand-written code that illustrates what you are saying, especially the "orders of magnitude worse" part.