People have measurably lower levels of ownership and understanding of AI-generated code. Those using GenAI reap major savings in time and cognitive effort, but the task of verification is shifted to the maintainer.
In essence, we get the output without the matching mental structures being developed in the humans who produced it.
This is great if you have nothing left to learn; it's not so great if you're a newbie or have low confidence in your skills.
> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
> https://arxiv.org/abs/2506.08872
> https://www.media.mit.edu/publications/your-brain-on-chatgpt...
> The people using GenAI reap a major time and cognitive effort savings, but the task of verification is shifted to the maintainer.
The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).
Why does everyone who hears "AI code" automatically think "vibe-coded"?
While I agree with this intuitively, I also can't get past the counterargument that people said the same thing when everyone switched from assembly to C, Fortran, etc.