It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them. The fundamental issue is that there aren't many convincing demonstrations holdouts can relate to, and there remains basically no evidence of real value gained.
Users attest to higher productivity and point to material but intermediate factors like token use, generated lines of code, PR counts, etc., but there doesn't seem to be a convincing revolution in the quantity or quality of mature software being delivered.
Combine those puzzling impressions of outcomes with a sense, for many, that they don't have a personal problem warranting a new tool, and you end up with a pretty earnest and defensible indifference.
To get holdout engineers using AI, the industry needs to focus on demonstrating relatable workflow improvements and practical improvements to finished work product. Instead, policies like token-use incentives just rely on luring them into pulling the slot machine handle, with the expectation that once they do, they'll join the cadre of other converts who justify their transition with subjective improvements and intermediate metrics.
Unfortunately, a demonstration compelling enough to win over a skeptical colleague would require measuring developer productivity.
Among skeptics, I've only seen people won over by using it themselves, because when they use AI for their own work, they invest the time to review the code, understand it, and assess its quality by their own standards. That's how people learn to trust AI coding assistance.
Here's one selling point from an experience I'm living through right now:
Others will use AI, and it will make your life miserable. You need to know enough about AI to be able to fight back.
The experience: one employee, self-selected, assigned themselves the task of configuring integration with a MySQL HA deployment. They produced a mountain of code in a short month (we're talking close to a hundred thousand lines of Python). And they decided to go with Oracle's tools instead of Galera...
Everything this employee produces is, quite obviously, AI-generated. In the initial stages, they also worked on the project completely alone: no reviews. To give a sense of the scale of this insanity: one of the configuration scripts I'm working with now is 9K+ lines of Python that's supposed to run from `mysqlsh`. About half of it is module-level variables.
It will take many months to restructure this "prototype" by hand. It's a pain to read and to navigate; the GitLab UI lags perceptibly just displaying the script, never mind the diffs. I will absolutely need AI just to make sense of it (I'm not allowed to fix it). And if it ever comes to fixing, I can't imagine that being done without automation of some sort.
Unfortunately, AI generates problems that, sometimes, only AI can fix. :(
> It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them
I have. My conclusion is... humans are deeply irrational when it comes to rapid change.
Egg or olive oil prices spike, and humans oust an entire government.
The rate of immigration spikes, and humans throw immigrants into camps and break useful treaties.
Most of the resistance I've observed amongst engineers is resistance to change generally.
And then digging in when challenged.
Software engineering organizations have agreed for decades that a meaningful measure of developer productivity is a literal impossibility.
So now introduce AI and tell every developer that they need to be 20% more effective. 20% of what?