Hacker News

33MHz-i486 · yesterday at 8:46 PM · 10 replies

I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching it). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.


Replies

tiffanyh · yesterday at 9:15 PM

That’s what’s needed when you go from $9B in ARR … to $30B in ARR literally just one quarter later.

That kind of insane growth & demand is unprecedented at that scale.

https://www.anthropic.com/news/google-broadcom-partnership-c...

mrandish · yesterday at 11:01 PM

> suddenly model quality is back up again.

I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.

They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.

data-ottawa · yesterday at 10:13 PM

For the last 3 days it has taken me a minimum of 6 minutes to get a response, so I don't think model capacity has improved.

Sol- · yesterday at 9:56 PM

Perhaps the adversity of the contracts cancels out against their sudden success and increased valuation, and it ends up a wash compared to the counterfactual scenario where they had speculated on high growth early on.

ux266478 · today at 12:35 AM

They should probably look at moving away from general-purpose hardware for their actual products and reserve GP hardware for R&D. You don't need frontier nodes to run circles around GPGPUs; an ASIC fabbed on a 28nm node is more than enough to embarrass an H100 (and much cheaper).

AI is in such desperate need of software-hardware co-development practices that it's infuriating watching the industry drag its feet. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.

dakolli · today at 4:45 AM

Another LLM user exhibiting the behavior of a gambling addict: "That table is cold this week", "that machine isn't hot, you won't win on it". Every month LLM users come up with reasons why they think the model is underperforming, based on some occulted reasoning only they perceive. Maybe you're just frying your brain by using LLMs the way you do.

scoot · today at 12:29 AM

> suddenly model quality is back up again

Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem

elAhmo · yesterday at 11:14 PM

Do you really think that, for companies of this size, signing a contract would immediately show up as end users noticing improved model quality?

Onavo · yesterday at 9:00 PM

Well, to a certain extent it also blunts competition: Gemini is less of a threat if its main investor is also backing Anthropic. The issue is when the pyramid scheme collapses...
