Hacker News

pydry · yesterday at 1:41 PM · 7 replies

Jevons paradox only applies if demand hasn't already been saturated.

The fact that public LLM usage is leveling off at a price of $0, and that Jensen "we make the shovels in this gold rush" Huang is rather desperately claiming you need to spend $250k/year in tokens to be taken seriously, suggests that demand saturation may not be that far off.

Whether Jevons paradox applies to software engineers is, I think, another open question. I'm constantly being told that it doesn't and that LLMs make half of us redundant now, but I'm skeptical - so much of the automation I see is broken or badly done.


Replies

raincole · yesterday at 2:28 PM

It is quite hard to imagine that demand is saturated now. I think any company that uses even a sliver of AI will happily increase its token consumption 100x if it's free.

Marha01 · yesterday at 3:13 PM

Demand for top models is definitely not saturated, at least when it comes to programming. If I could afford to use 5x more Claude Opus 4.6 tokens, I would!

zozbot234 · yesterday at 8:58 PM

> The fact that public LLM usage is leveling off at a price of $0

The price is very much not $0; even 'free' models have usage capacity limits that equate to a shadow price.

adventured · yesterday at 2:03 PM

LLMs haven't remotely begun to be integrated into the lives of the typical person. Not even close. The typical person isn't using LLMs for their daily life tasks at all; they're using them almost entirely for limited conversational matters (e.g. discussing a medical issue or a work-related question with GPT).

This is the first or second inning in the LLM rollout. It'll take 15-20 more years for full integration of AI agents into the life of the typical person.

The claw experiments, for example, can just barely be considered alpha stage. They're early AI garbage, unfit for the average person to use safely. That new world hasn't gotten near the typical person yet.

The compute required for full integration of AI agents into the lives of average people - billions of them - is far beyond 10x where we are now.

vonneumannstan · yesterday at 3:52 PM

Pretty sure the entire markets for storage, HBM, DDR5, etc. are completely sold out for the next several years. How is that saturated?

kmeisthax · yesterday at 2:37 PM

I thought we were going to hit token saturation years ago, but they keep inventing new ways to use tokens. Like, instead of asking a chat model to write something and getting ~1000 tokens out of it, you now have an agent producing ~10,000 tokens - or, worse, spawning 10 subagents that collectively burn ~100,000 tokens. All for marginally better answers with significantly higher compute usage.
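
A quick back-of-the-envelope in Python, using the illustrative per-request figures above (these are just this comment's rough numbers, not measurements):

    # Rough token amplification, using the illustrative figures above.
    CHAT_TOKENS = 1_000               # plain chat completion
    AGENT_TOKENS = 10_000             # one agent run: planning, tool calls, retries
    SWARM_TOKENS = 10 * AGENT_TOKENS  # 10 subagents spawned for a single task

    for label, tokens in [("chat", CHAT_TOKENS),
                          ("agent", AGENT_TOKENS),
                          ("subagent swarm", SWARM_TOKENS)]:
        print(f"{label:>14}: {tokens:>7,} tokens ({tokens // CHAT_TOKENS}x a plain chat answer)")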

Personally, I would have used all those tokens to generate synthetic data for IDA (iterated distillation and amplification) so that the more efficient 1000 token/answer chat model can answer more questions, but apparently that doesn't justify an insane datacenter buildout.
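
For the curious, a minimal sketch of what that IDA-style distillation loop might look like - every name below is a made-up placeholder under this comment's framing, not a real model or library call:

    # Sketch: spend the expensive "amplified" tokens once to build a synthetic
    # dataset, then fine-tune the cheap chat model on it.

    def amplified_answer(question: str) -> str:
        # Stand-in for the expensive pipeline: agents, subagents, long reasoning chains.
        return f"high-quality answer to: {question}"

    def fine_tune(model_name: str, dataset: list[tuple[str, str]]) -> str:
        # Stand-in for ordinary supervised fine-tuning on (question, answer) pairs.
        print(f"fine-tuning {model_name} on {len(dataset)} synthetic examples")
        return model_name + "-distilled"

    questions = ["how do I rotate a log file?", "explain Jevons paradox"]
    synthetic = [(q, amplified_answer(q)) for q in questions]  # burn the tokens once, offline
    small_model = fine_tune("cheap-1k-token-chat-model", synthetic)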

Analemma_ · yesterday at 3:01 PM

We’re not even close to demand saturation with tokens. Have you seen the people rending their garments with rage that Anthropic and Google won’t let them use their flat-rate subscriptions to burn millions of tokens per hour on OpenClaw? And that’s a tiny set of die-hard tinkerers.

The ceiling of token use when everyone has something akin to OpenClaw just running as a background process on their phone is way higher than there’s supply for right now. Jevons paradox is still in full force.
