I work with people who generate solutions without really looking at what was produced (group A). They click around the app or run some tests and decide if they're content with the result, then ship it. You can see Claude's fingerprints all over the PR and it's safe to assume they didn't change much of anything.
Then I have coworkers who work through the problems, build harnesses to test the changes and verify results, work through multiple solutions, synthesize ideal outcomes into a single one, benchmark, refine, test the result thoroughly, and provide sane verification processes in the PR. This is group B.
They're entirely different ways of using AI. One seems passable for now (look how fast we're going!), while the other is arguably a new version of what's possible (within a given time frame, at least) and defines a new normal for software engineering that I rarely saw outside of exceptionally professional contexts. You don't move as quickly as group A, but you still move faster, and produce better software, than most people have at virtually every company I've worked for.
I see group A being pushed out of the field fairly quickly. LLMs let you work incredibly effectively if you care to learn how. That kind of rigor (group B) is going to be the default, and might become the only way humans remain a useful component in the loop. Group A is likely to become replaceable by frontier models before long.
Bro group B might as well write the code themselves. this is getting silly