It works; I've shipped this as a "local inference" / poor person's Ollama for low-end LLM tasks like search. The main win is that it's free and privacy preserving, and (mostly) transparent to users in that they don't have to do anything, which is great for giving non-technical users local inference without making them do scary native things.
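For anyone curious, the call site looks roughly like this. A minimal TypeScript sketch; the Prompt API surface has shifted between Chrome versions, so treat names like LanguageModel, initialPrompts, and prompt() as best-effort assumptions rather than a stable contract:

    // Sketch only: the Prompt API is still an origin trial and its surface
    // has changed across Chrome versions, so these names are assumptions.
    declare const LanguageModel: any; // no standardized TS types yet

    async function rewriteQuery(userQuery: string): Promise<string> {
      // create() spins up a session backed by the on-device model
      const session = await LanguageModel.create({
        initialPrompts: [
          { role: "system", content: "Rewrite the user's query into search keywords." },
        ],
      });
      // prompt() runs on-device; no tokens leave the machine
      const keywords: string = await session.prompt(userQuery);
      session.destroy(); // free the model session when done
      return keywords;
    }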
But keep in mind the actual experience for users is not great; the model download is orders of magnitude larger than the browser itself, and it has to finish before you get your first token back. That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
It's a one-time download shared by all websites that use the Prompt API.
A bigger issue is that the models most standard PCs can run are both tiny and slow. I was going to try using the Prompt API to rewrite the text of (Infocom) text adventures on the fly, but on many PCs this is currently too slow to be feasible.
> But keep in mind the actual experience for users is not great; the model download is orders of magnitude larger than the browser itself, and it has to finish before you get your first token back.
With MoE models, you could fetch expert layers from the network on demand by issuing HTTP range requests for the corresponding byte offsets, similar to how BitTorrent downloads file chunks from multiple hosts. You'd still have to download the shared layers, but time to first token would then be proportional to active parameter count rather than total parameter count. Of course this wouldn't be totally "offline" inference anymore, but for a web browser feature that's not a key consideration.
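A rough sketch of that fetch path in TypeScript. MODEL_URL and the expert offset index are made up for illustration, and it assumes a server that honors Range requests (answers 206 Partial Content):

    // Hypothetical index: expert ID -> [byteStart, byteEnd) in the weight file.
    const MODEL_URL = "https://example.com/moe-model.bin"; // illustrative
    const expertIndex: Record<string, [number, number]> = {
      "layer3.expert7": [104_857_600, 157_286_400],
    };

    const cache = new Map<string, ArrayBuffer>();

    async function fetchExpert(id: string): Promise<ArrayBuffer> {
      const hit = cache.get(id);
      if (hit) return hit;

      const [start, end] = expertIndex[id];
      // The Range header asks the server for just this byte slice;
      // a compliant server responds 206 Partial Content.
      const res = await fetch(MODEL_URL, {
        headers: { Range: `bytes=${start}-${end - 1}` },
      });
      if (res.status !== 206) throw new Error(`expected 206, got ${res.status}`);

      const weights = await res.arrayBuffer();
      cache.set(id, weights); // keep hot experts resident, like torrent pieces
      return weights;
    }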
> operating systems start reliably shipping their own prebaked models
Here's hoping that dystopia never happens.
> It works; I've shipped this as a "local inference" / poor person's Ollama for low-end LLM tasks like search
fantastic!
> the model download is orders of magnitude larger than the browser itself, and it has to finish before you get your first token back
sure, but does this mean the model is lazily downloaded? that is, if I used this and a user triggered the first call to the model, would they be stuck waiting until the download finished?
that sounds like a horrible user experience - maybe Chrome reduces the confusion by showing a download progress dialog or similar?
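For what it's worth, the API explainer describes a monitor hook for exactly this, so a page could render its own progress bar. A hedged sketch; the availability states and the downloadprogress event are per the explainer at the time of writing and may have changed since:

    // Sketch: names per the Prompt API explainer; the surface is in flux,
    // so treat the availability states and event shape as assumptions.
    declare const LanguageModel: any;

    async function createWithProgress(onProgress: (pct: number) => void) {
      // reported states: 'unavailable' | 'downloadable' | 'downloading' | 'available'
      const state = await LanguageModel.availability();
      if (state === "unavailable") throw new Error("no on-device model");

      // create() triggers the (lazy, one-time) download if needed;
      // the monitor lets the page render its own progress UI.
      return LanguageModel.create({
        monitor(m: any) {
          m.addEventListener("downloadprogress", (e: any) => {
            onProgress(Math.round(e.loaded * 100)); // e.loaded is 0..1
          });
        },
      });
    }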
also, any idea what the on-disk impact is?
Is it actually privacy preserving? Chrome mostly exists to extract all the information it can from a user without immediately drawing a lawsuit whose penalty outweighs what's gained through ads, military contracts, etc. Android isn't too far off either. I would welcome any alternative to this. I can see applications for this being things like "while the device is at rest and charging, summarize all of the user's recent text communications," or whatever else, as a legal loophole for wiretap laws.
> That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
Maybe the next big thing will be premium software subscription tiers that bundle a bunch of 5090s as an extra.