> it is cutting jobs to offset its A.I. spending, saying last month that it would slash 10 percent of its work force.
> Meta also introduced internal dashboards to track employees’ consumption of “tokens,” a unit of A.I. use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many A.I. agents that others had to introduce agents to find agents, and agents to rate agents, two people said.
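For scale, the "roughly four characters per token" rule of thumb from the quote can be sketched in a few lines of Python. This is only the crude heuristic the article describes, not how any real tokenizer works, and the function name is made up:

```python
import math

def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return math.ceil(len(text) / 4)

# A 2,000-character message is on the order of 500 tokens.
print(estimate_tokens("x" * 2000))
```

Real tokenizers split on learned subword units, so actual counts vary with the text and the model; this is just the back-of-envelope version a dashboard like that would be aggregating.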
Maybe the first to be laid off should be the ones who thought it made sense to track token consumption. Goodhart's Law doesn't even apply in this scenario, because that's a dumb metric whether or not you're using it to evaluate employees.
It will get really funny when they start imposing an exact token quota, where too little means you're an outdated Luddite and too much is inefficient and wastes money.
A funny Goodhart's Law parallel showed up during GPT-5.1 training, where the model was rewarded for using the web search tool, so it learned to superficially invoke web search to calculate "1 + 1" and then ignore the result.
> that's a dumb metric whether or not you're using it to evaluate employees
Only if you assume in good faith that the point is to evaluate employees' productivity against some stated goal for the company or role. If you view the metric from other possible positions, the one I think fits best is promoting token consumption by any means. That's useful for signaling to the broader market that AI is profitable and merits more investment, and it may be part of a deal between Meta and whoever they're buying tokens from. It makes more sense to me that Meta would rather leverage its control over people to shape the market and general sentiment than have them maintain stable, well-established, market-dominant software services that really only need to be kept chugging along. Isn't mass manipulation their whole business? Why wouldn't they use their employees and internal structure to contribute?
I'm reminded of the sales dashboard that tracked the number of calls each sales employee made. The employee in 1st place, who I assume just called the same customers over and over, had about 10x the call count of 2nd place.
If someone gave me unfettered access to inference on modern LLMs, there would be no meaningful measure other than the total system-wide capacity of whatever the company had available.
Not that I disagree with you, but I've heard of such tactics being used in some orgs at both Google and Microsoft as well.
It seems like a common conclusion from a management that wants to push for AI adoption. I doubt it’s super effective, but we’ll see how it turns out.
This is a company incentive to increase expenses. Maybe not as bad as Dilbert's "I'm gonna write me a new minivan," but still.
My company did something similar (dashboard to track tokens). It was made available to managers about two weeks before it was available to everyone, so I got to see all my reports' usage before they knew they were being tracked.
The dashboard got announced publicly and just about everyone's usage went up by 100%-200% almost immediately and hasn't come back down, but nothing I'm tracking shows any increase in output since then. We absolutely saw productivity gains a few months ago, but it feels like now people are just burning tokens for the sake of it.
On top of that, as a reaction to the rising costs, we've now gone from unlimited token use to every engineer having a monthly token budget of $600. I get why that was done, but we're a publicly traded US tech company worth tens of billions of dollars. We're not hurting for money, and the knock-on effects are just crazy. For example, I had an engineer in sprint planning say about a large migration-type ticket, "Can we hold that ticket until the end of the month? I don't want to burn through all my tokens this early in the month." I just cannot imagine that that's the culture our executive team was trying to cultivate when they first purchased these tools.
I'm not anti-AI and actually really enjoy using AI for development, but over and over I've watched business leaders shoot themselves in the foot trying to force more AI use on their employees in pursuit of ever increasing productivity. I just keep thinking that there's no way that any productivity gains we've seen from the forced, tracked AI usage are enough to offset the productivity lost from anxiety and churn caused by the unrealistic productivity expectations, vanity metrics, and mass layoffs that have come along with increased AI adoption.