Hacker News

Interaction Models

236 points by smhx yesterday at 8:53 PM | 27 comments

Comments

monkeydust today at 11:41 AM

This does feel like where things should be going for more natural human-AI interaction patterns. Nice write up and demos.

vessenes yesterday at 11:24 PM

These videos are worth a watch. There are tons of impressive moments, but they had me at the very first one where a woman says: "I'm going to tell you a story," and then pauses for a long, luxurious sip from a cup of coffee, and the model ... does nothing, just waits. Take my money.

Speaking of taking my money, what's the economic model for a company like this? They've published a fair amount about their architecture - enough that I imagine the frontier labs could implement it. Patents? Trade secrets? It's hard for me to understand how you'd compete with the training compute and know-how at Anthropic/GOOG/oAI/Meta without some sort of legal protection.

I can't wait to see what these model architectures do with like 30-40% lower latency and more model intelligence. Very appealing. For reference, these look to be roughly 1/10 the size of Opus 4.7 / GPT 5.x series -- 275B, 12B active. So there's lots of room to add intelligence, and lots of hope that we could see lower latency.

alyxya yesterday at 10:35 PM

The noteworthy things to me are that the architecture is a transformer that takes in text, image, and audio input and produces text and audio output, all trained together, and that it works in near real-time by interleaving inputs and outputs rather than generating the whole output from a given prompt.

> Time-Aligned Micro-Turns. The interaction model works with micro-turns continuously interleaving the processing of 200ms worth of input and generation of 200ms worth of output. Rather than consuming a complete user-turn and generating a complete response, both input and output tokens are treated as streams. Working with 200ms chunks of these streams enables near real-time concurrency of multiple input and output modalities.

That's probably the main thing that distinguishes it from the multimodal models from other frontier labs as far as I can tell.
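
To make that concrete, here's a rough sketch of what a 200ms micro-turn loop could look like. All the names (run_session, encode_chunk, step, decode_chunk) are my guesses for illustration, not their actual API:

    CHUNK_MS = 200  # each micro-turn covers 200 ms of audio

    def run_session(model, mic, speaker):
        state = model.initial_state()  # conversation state carried across micro-turns
        while mic.is_open():
            # Ingest 200 ms of user input (may be silence) as a stream of tokens...
            in_tokens = model.encode_chunk(mic.read(CHUNK_MS))
            # ...and in the same step emit 200 ms worth of output tokens,
            # so the model is always both listening and (possibly) speaking.
            out_tokens, state = model.step(in_tokens, state)
            # The output can decode to silence if the model decides to keep waiting.
            speaker.play(model.decode_chunk(out_tokens))

The point being that there is no "user turn ends, model turn begins" boundary - the model can start talking, stay quiet, or back-channel at any 200ms step.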

rohitpaulk yesterday at 10:00 PM

Aside from how impressive the model is, the demos here are very well done! Quirky and short, unlike what we're used to from Anthropic and OpenAI.

tedsanders yesterday at 10:34 PM

Very cool! The demos felt fairly contrived - e.g., count things while I talk. I wonder what more useful or commercial applications look like.

lostathome today at 7:34 AM

This looks similar to things people are already building locally with Gemma4 and TTS; just a bit fancier.

Local models will catch up soon.

abhik24 today at 3:51 AM

Very cool demo. I wonder what the billion-dollar applications of a thing like this would be.

nasreddin today at 3:59 AM

Very cool tech. I think people are underrating how this will be used.

suriya-ganesh yesterday at 10:07 PM

Incredibly impressive demos. I wonder what the training data for these models looks like.

Is it separate batches of special "skills" added post-training? How can they guarantee the models won't eventually lose a skill?

kburman today at 5:04 AM

Simultaneous speech is best.

emsign yesterday at 10:10 PM

That's neat and definitely the next step. But to be honest, I don't want an AI talking to me like that.

Nimitz14 today at 3:48 AM

Really really cool. If they can serve this efficiently it would disrupt a lot of things.

zuzululu today at 4:16 AM

Am I the only person not impressed by this? It still feels awkward with the pauses, and doesn't OpenAI offer voice cadence already?

modeless today at 3:34 AM

This deserves to be at the top of HN; shame it seems like it's not going to make it. Some of the demos are hilarious. Clearly, having the model appropriately choose when to speak is a major thing that has been missing from voice models to date. It seems like the latency is still a touch too high to be truly human-like, though.
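
For contrast, most voice pipelines today decide when to reply with a fixed silence threshold on a voice-activity detector, roughly like this (the threshold and names are made up for illustration):

    SILENCE_MS_TO_END_TURN = 700  # reply once the user has been quiet this long

    def user_turn_is_over(vad_frames):
        silence_ms = 0
        for frame in vad_frames:  # one VAD frame per 20 ms of audio
            silence_ms = 0 if frame.has_speech else silence_ms + 20
            if silence_ms >= SILENCE_MS_TO_END_TURN:
                return True  # assume the user is done and start replying
        return False

Which is exactly why a long pause either gets talked over or adds a fixed delay to every response, instead of the model itself deciding whether to speak.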