Hacker News

vessenes · yesterday at 11:24 PM · 3 replies

These videos are worth a watch. There are tons of impressive moments, but they had me at the very first one where a woman says: "I'm going to tell you a story," and then pauses for a long, luxurious sip from a cup of coffee, and the model ... does nothing, just waits. Take my money.

Speaking of taking my money, what's the economic model for a company like this? They've published a fair amount about their architecture - enough that I imagine frontier labs could implement. Patents? Trade secrets? It's hard for me to understand how you'd be able to beat that training compute and knowhow at Anthropic/GOOG/oAI/Meta without some sort of legal protection.

I can't wait to see what these model architectures do with like 30-40% lower latency and more model intelligence. Very appealing. For reference, these look to be roughly 1/10 the size of Opus 4.7 / GPT 5.x series -- 275B, 12B active. So there's lots of room to add intelligence, and lots of hope that we could see lower latency.


Replies

swyx · today at 1:28 AM

> They've published a fair amount about their architecture - enough that I imagine frontier labs could implement.

i think the real ones know this is the tip of the iceberg? hparam tuning, data recipes, data collection, custom kernels, rl/eval infra, all immensely deep topics that would condense multiple decades of phd lifetimes to produce SOTA performance (in both senses of the word) like this.

i would also calibrate what you are impressed by. simply waiting is a posttrain thing - the fact that gemini and oai have not prioritized it is not something you should overindex on too hard. what they showed with full duplex is technically far, far harder to achieve

edg5000 · today at 4:36 AM

In China it's become well known that promising new companies will get an offer from either Alibaba or Tencent. In the US, it's probably similar. Everything that's out in the open can get acquired or simply copied. Maybe that is what Thinking Machines is hoping for as well?

babelfish · today at 2:55 AM

they hire leading researchers, and leading researchers won't work for you unless they're able to publish
