Hacker News

nl · today at 12:21 AM

> None of these companies, be it Anthropic, OpenAI, xAI, Google, Meta, Microsoft, are profitable in the AI department,

Citation needed.

All reporting is that they are profitable on the inference side, and all the VC money is going to building more data centers to run more inference. (Note that the coding subscription plans are probably only break-even on average; the money is in the API.)

> The Chinese models are keeping up with them, while offering the models for free and able to run on consumer grade hardware, and more importantly they train them for cheap.

No one is running DeepSeek v4 (a 1.6T-parameter model) on consumer hardware.

They aren't much cheaper to train than the US models. Training is subsidized by the big Chinese tech companies. They are slightly cheaper because they are smaller (and weaker) than the 5T- and 10T-parameter models the US frontier labs are training, and the US labs are paying for a more diverse set of RL data (which shows up in broader benchmark performance).

> we just saw SORA shut down because it was bleeding money far too fast while the Chinese released video models that far surpassed it back to back to back...

Ironically this proves the point.

OpenAI didn't shut down Sora, just the subscription tier and the weird social-network app. You can still access it via API.

The Chinese models are API models and probably just as profitable for them as the LLMs are for the US frontier labs.

[1] has prices for video models. There is a big range, but Google's Veo model and OpenAI's Sora are around the same price as the Chinese models.

[1] https://openrouter.ai/models?output_modalities=video
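For the curious, OpenRouter also exposes its model list programmatically, so the price comparison in [1] can be pulled directly. A rough Python sketch: the public models endpoint is real, but the exact response field names used here (`architecture`, `output_modalities`, `pricing`) are assumptions — verify against the live schema before relying on them.

```python
import json
import urllib.request


def filter_video_models(models):
    """Keep (id, pricing) for entries whose output modalities include video.

    Field names are an assumption about the payload shape, not a
    documented contract.
    """
    return [
        (m.get("id", "?"), m.get("pricing", {}))
        for m in models
        if "video" in m.get("architecture", {}).get("output_modalities", [])
    ]


def fetch_video_prices():
    """Fetch OpenRouter's public model list and keep video-output models."""
    url = "https://openrouter.ai/api/v1/models"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp).get("data", [])
    return filter_video_models(data)


# Illustrative (made-up) payload shape, to show the filtering logic:
sample = [
    {"id": "acme/video-gen",
     "architecture": {"output_modalities": ["video"]},
     "pricing": {"request": "0.25"}},
    {"id": "acme/chat",
     "architecture": {"output_modalities": ["text"]},
     "pricing": {"prompt": "0.000001"}},
]
print(filter_video_models(sample))  # only acme/video-gen survives
```

Comparing the per-request numbers this returns for Veo, Sora, and the Chinese video models is how you'd check the "around the same price" claim yourself.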


Replies

strange_quark · today at 1:02 AM

What does profitable on inference mean? As far as I can tell, none of these companies have rigidly defined it, let alone it being a GAAP number. And yeah, if you subtract out all your R&D, payroll, sales, marketing, and other overhead, and get someone else to take on the debt or dig into their free cash flow to build the hugely expensive infrastructure on which you depend, it'd be pretty hard to not be "profitable". It's almost humorous how dumb of a metric "profitable on inference" is.

Ask yourself: if AI were so profitable, why don't any of the big hyperscalers break out AI revenue in their earnings? OpenAI and Anthropic both project huge losses for the next couple of years; it's not hard to find.

The real problem is, as the GP comment pointed out, that they can never stop training. As long as they're committed to building these behemoth models, the second they stop training, someone else will catch up and everybody will switch over because it's trivial to do so.
