That could explain the glut of AI hype on HN. Some people think it's magic, when it's just creating a lot of barely-functional slop. If they actually looked at the code it creates, they probably wouldn't be shouting about it from the rooftops. It almost seems like AI has its own "reality distortion field".
I often give the AI a task to produce code for some specific thing, then write my own solution to the same problem in parallel. My solution is always a quarter of the code, and likely far easier for another real human to read through.
I also either match or beat the AI in speed; Claude seems to take forever sometimes. With all the coddling and revisions the AI requires, I'm usually done before it is. It takes a non-negligible amount of time to think through and write down instructions detailed enough for the AI to have a shot at not fucking it up - time I could have spent coding a straightforward solution that I already knew how to produce without step-by-step instructions.
In my experience, it's definitely faster to write the code manually when it's something you know well. What LLMs enable is skipping the research and learning by producing usable code immediately.