Hacker News

michaelbuckbee · yesterday at 12:10 PM · 1 reply

FWIW - I did a fairly large comparison of Gemini Nano (the in-browser AI model) vs a comparable free hosted Gemma model (via OpenRouter), and the hosted model absolutely trashed the local model on every axis: speed, reliability, availability, etc. [1]

I'm not particularly happy about that outcome, as I'd like to see more locally run AI models for privacy and efficiency reasons, so take this more as a warning that, at present, there are some severe tradeoffs.

1 - https://sendcheckit.com/blog/ai-powered-subject-line-alterna...


Replies

kbx · today at 5:46 AM

Hey, Chrome PM for built-in AI here.

Thanks for the write-up and the comparison, but more importantly for using the API in production!

You’re highlighting the "state of the art" gap we’re working to close. Cloud models will always have the advantage of massive parameter counts, but our bet is that for a huge class of simpler or high-volume tasks, the upsides of on-device (e.g. zero-cost, permission-less start with no quotas/infra, network-resilience, privacy) make it a compelling trade-off.
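In practice, that trade-off usually surfaces as a feature-detect-and-fall-back pattern on the page. A minimal sketch, assuming the `LanguageModel` global described in Chrome's built-in AI (Prompt API) documentation; `callHostedModel` here is a hypothetical stand-in for whatever hosted endpoint you would otherwise call:

```javascript
// Sketch only: use the on-device model when Chrome reports it available,
// otherwise fall back to a hosted model.
async function callHostedModel(promptText) {
  // Hypothetical placeholder for a fetch() to a hosted model
  // (e.g. a free Gemma model via OpenRouter).
  return `hosted:${promptText}`;
}

async function generate(promptText) {
  // `LanguageModel` only exists in browsers that ship the built-in AI API;
  // availability() can also report "downloadable"/"downloading" states.
  if (typeof LanguageModel !== "undefined" &&
      (await LanguageModel.availability()) === "available") {
    const session = await LanguageModel.create();
    return session.prompt(promptText);
  }
  // No (ready) on-device model: use the hosted fallback.
  return callHostedModel(promptText);
}
```

Outside a supporting browser the `LanguageModel` check fails and every call goes to the hosted path, which is also where the reliability gap michaelbuckbee measured would show up.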

The models have been getting better at a rapid clip, and the team is heads-down on optimizing performance and reliability. To that end, we're always grateful for feedback. If you hit specific bugs, crashes, or quality regressions, filing a report with repro steps is the best way to help us improve. You can file those on crbug.com under the "Chromium > Blink > AI" component.