This extends beyond AI agents. I'm seeing it in real time at work — we're rolling out AI tools across a biofuel brokerage and the first thing people ask is "what KPIs should we optimize with this?"
The uncomfortable answer is that the most valuable use cases resist single-metric optimization. The best results come from people who use AI as a thinking partner with judgment, not as an execution engine pointed at a number.
Goodhart's Law + AI agents basically means automating that failure mode at machine speed.