They are more effective than on-the-ground, in-your-face evidence, largely because the people who are so against AI are blind to it.
I can hold a result of AI in front of their faces and they will still proclaim it's garbage and that everything else is fraudulent.
Let's be clear: you're arguing against a fantasy. Nobody, not even proponents of AI, claims that AI is as good as humans. It's nowhere near that. But it is good enough for pair programming; that is indisputable. Yet we have tons of people like you who stare at reality, deny it, and call it fraudulent.
Examine the lay of the land: if that many people are so divided, it really means both perspectives are correct in some way.
If you want to be any good at all in this industry, you have to develop enough technical skills to evaluate claims for yourself. You have to. It's essential.
Because the dirty secret is that a lot of successful people aren't actually smart or talented; they just got lucky. Or they aren't successful at all; they're just good at pretending they are, either by taking credit for other people's work or by flat-out lying.
I've run into more than a few startups that are just flat out lying about their capabilities and several that were outright fraud. (See DoNotPay for a recent fraud example lol)
Pointing to anyone and going "well THEY do it, it MUST work" is frankly engineering malpractice. It might work. But unless you have the chops to verify it for yourself, you're just asking to be conned.
I think the author is way understating the uselessness of LLMs in any serious context outside of a demo to an investor. I've had nothing but low IQ nonsense from every SOTA model.
If we're being honest with ourselves, Opus 4.5 / GPT 5.2 etc are maybe 10-20% better than GPT 3.5 at most. It's a total and absolute catastrophic failure that will go down in history as one of humanity's biggest mistakes.
Just to be more pedantic, there is more nuance to all of that.
Nobody smart is going to disagree that LLMs are a huge net positive. The finer argument is whether, at this point, you can just hand coding off to an LLM. People who say yes simply haven't used LLMs extensively enough. The amount of time you have to spend prompt-engineering the correct response is often the same as the amount of time it takes to write the correct code yourself.
And yes, you can put together AGENT.md files, MCP servers, and so on, but then it becomes a game of this: https://xkcd.com/1205/
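The linked xkcd strip is ultimately a break-even calculation: how much time is it worth sinking into automation (or tooling like AGENT.md files) given how much time it saves per use? As an illustrative sketch only (the function name, parameters, and numbers are made up for this example, not from the strip or the comment above):

```python
def worth_automating(seconds_saved_per_use: float,
                     uses_per_day: float,
                     horizon_days: float = 5 * 365) -> float:
    """Return the maximum time (in hours) worth spending on automation
    before it costs more than it saves over the given horizon."""
    total_seconds_saved = seconds_saved_per_use * uses_per_day * horizon_days
    return total_seconds_saved / 3600

# Example: shaving 30 seconds off a task done 5 times a day,
# amortized over five years of use:
budget = worth_automating(30, 5)
print(f"{budget:.0f} hours")  # prints "76 hours"
```

The same arithmetic applies to prompt-engineering setup: if the tooling takes longer to build and maintain than the coding time it saves over its useful life, it was a net loss.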