Hacker News

giwook · yesterday at 4:44 PM

Lots of us have noticed that usage limits for Claude have been nerfed in recent weeks/months.

If anything, these new multipliers are more transparent than anything OpenAI or Anthropic have communicated regarding actual costs and give us a more realistic understanding of what it's costing these providers.

The fact that we were able to get such a substantial amount of usage for $20/$100/$200 a month was never meant to last, and to think otherwise was perhaps a bit naive.

This feels like a strategy from the ZIRP era of tech growth, where companies burned investor capital and gave away their products and services for free (or subsidized them heavily) to prioritize user acquisition. Once they'd gained enough traction and stickiness, they'd implement a monetization strategy to capitalize on that user base.


Replies

dualvariable · yesterday at 5:18 PM

However, inference costs for good-enough models are likely to keep declining. We're probably hitting diminishing returns on model size and training: the new generations aren't quantum leaps anymore, and newer open source models like DeepSeek are starting to get good enough.

There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter, fill it with open source DeepSeek inference, and undercut your prices by 10x while still making a very good ROI. That's a business model right there. Right now plenty of people will protest that they couldn't do their jobs with lesser models, but that group will shrink over time. Already, consumers using AI for writing presentations, generating cooking recipes, and ELI5 answers to common questions aren't going to miss much with a lesser model. And that tier will only get cheaper over time.
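The "undercut by 10x and still make a good ROI" claim is easy to sanity-check with a back-of-envelope calculation. All of the numbers below are illustrative assumptions, not real figures from any provider:

```python
# Back-of-envelope sketch: can a commodity host serving an open-weight
# model sell at 1/10th a frontier provider's price and still profit?
# Every number here is a made-up assumption for illustration.

frontier_price = 15.00                 # $ per 1M output tokens (hypothetical frontier rate)
undercut_price = frontier_price / 10   # sell at one tenth of that

gpu_hour_cost = 2.00                   # $ per GPU-hour, amortized hardware + power (assumed)
tokens_per_gpu_hour = 3_600_000        # ~1,000 tokens/s sustained throughput (assumed)

# Cost to serve 1M tokens on this hypothetical setup.
serving_cost = gpu_hour_cost / (tokens_per_gpu_hour / 1_000_000)
margin = undercut_price - serving_cost

print(f"sell at ${undercut_price:.2f}/1M tokens, serve at ${serving_cost:.2f}/1M tokens")
print(f"gross margin: ${margin:.2f} per 1M tokens")
```

Under these assumptions the undercutter still clears a positive margin per million tokens, which is the commenter's point; if real serving costs are higher or throughput lower, the gap narrows accordingly.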

Also, on the business side: as AI inference costs escalate, there comes a point where businesses rediscover human intelligence and start hiring/training people to do more of the work with lesser models, if that turns out to be more productive than shelling out large amounts of cash for inference on the latest models. (Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...)

hirako2000 · yesterday at 6:24 PM

It does feel like the music is about to stop.

It has been years of cash injections now; investors can't keep feeding the beast forever.

stefan_ · yesterday at 8:41 PM

Dunno. If in this day and age you are making inference more expensive and more scarce, you are honestly moving in the wrong direction, and DeepSeek and others will gladly take your lunch.

sergiotapia · yesterday at 11:57 PM

That is folly, because the cost of switching providers is minimal, let alone switching models.


bluescrn · yesterday at 6:25 PM

Did anyone really expect AI to be cheap?

If/when it gets to the point where it can replace a skilled worker, the service can be sold for close to the same price as that skilled labour. But the AI can run 24/7, reliably, and scale up/down at a moment's notice.

There's not going to be much competition to drive prices down; the barriers to entry are already huge. There'll likely be one clear winner becoming a near-monopoly, or maybe we'll get a duopoly at best.
