This might be worth it if Gemma 4 E2B were a good model, but in all our testing it's useless without further training and finetuning, and those aren't use cases suited to normal web browsing, so it's hard to justify adding such broad and expensive infrastructure to support it.
Gemma 4 E4B is a much better model, but it's too large to simply download and run everywhere.
IMHO, this is jumping the gun. Google's going through a lot of effort to release a model that will give everyone a very poor first impression of what on-device models are capable of, souring it for everyone for a long time afterwards. It would be better to wait until a smaller, better model ships before doing this.
> Google's going through a lot of effort to release a model that will give everyone a very poor first impression of what on-device models are capable of, souring it for everyone for a long time afterwards.
I wonder what that will do for the competition between hosted GenAI and local models...
Most users aren't even going to know that this is here. Web developers will expose this capability to the user. The devs will have to determine if the model is delivering what they need.
It's good to have something to work with if these Web APIs are going to be part of a standard. I suppose this also means that all the browser vendors are likely to implement something.
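For what it's worth, this is roughly what "devs determining if the model delivers" looks like in practice: feature-detect the API, try the on-device model, and fall back when it isn't there. This sketch assumes the shape of Chrome's experimental Prompt API (a global `LanguageModel` with `create()` and `session.prompt()`), which is still in flux and may differ across browsers and releases.

```javascript
// Minimal sketch: use the on-device model when the (assumed) Prompt API
// exists, otherwise signal to the caller that a fallback is needed.
async function summarize(text) {
  // Feature detection: in browsers (or non-browser runtimes) without the
  // built-in model, the global simply won't be defined.
  if (typeof LanguageModel === "undefined") {
    return null; // caller can fall back to a hosted service instead
  }
  // Create a session against the bundled on-device model and prompt it.
  const session = await LanguageModel.create();
  return session.prompt(`Summarize in one sentence: ${text}`);
}
```

A real page would then check the return value and route to a hosted endpoint when it gets `null`, which is exactly the hosted-vs-local decision point mentioned above.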