> AI-users thus become less effective engineers over time, as their technical skills atrophy
Based on my experience, I think this will prove more true than not in the long run, unfortunately.
Professionally, I see people largely falling into two camps: those who augment their reasoning with AI, and those who replace their reasoning with it. I’m not too worried about the former; it’s the latter I worry about.
My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic there’s been a notable decline in the critical thinking skills of the children coming through her classes.
We have a whole generation (or two) of kids who have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear ready to engage critically with a system designed to make them believe they’re getting what they want, regardless of the actual quality.
> My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth.
I notice many of the adults in my life are doing this now as well.
> Professionally, I see people largely falling into two camps: those who augment their reasoning with AI, and those who replace their reasoning with it. I’m not too worried about the former; it’s the latter I worry about.
Related recent article posted on HN - https://news.ycombinator.com/item?id=47913650
I work with people who generate solutions without really looking at what was produced (group A). They click around the app or run some tests and decide if they're content with the result, then ship it. You can see Claude's fingerprints all over the PR and it's safe to assume they didn't change much of anything.
Then I have coworkers who work through the problem, build harnesses to test their changes and verify results, explore multiple solutions, synthesize the best ideas into a single one, benchmark, refine, test the result thoroughly, and provide sane verification steps in the PR. This is group B.
These are entirely different modes of using AI. One seems passable for now (look how fast we're going!); the other arguably redefines what's possible (within a given time frame, at least) and sets a totally new normal for software engineering, a level of rigor I virtually never saw outside of exceptionally professional contexts. You don't move as quickly as group A, but you still move faster, and produce better software, than most people did at virtually every company I've worked for.
I see group A being pushed out of the field fairly quickly. LLMs let you work incredibly effectively if you care to learn how. Group B's kind of rigor is going to become the default, and might be the only way humans remain a useful component in the loop. Group A is likely to become replaceable by frontier models before long.