It is a small model, so what utility can I / Google expect from it? What is the on-board model used for?
It's based on Gemma 3n, and it's not the best.
I find it works fine for simple classification, translation, and interpretation of images & audio. It can write longer prose, but the results are pretty bad.
It can also constrain its output to a JSON schema or a regexp, which is handy for anything you might want to do with structured data.
I find models of this size (I haven't tested this one specifically) to be very good at simple data extraction from user input. Think of things like parsing the date and time of an event from a description, or parsing a human-typed description of a repeating-event rule.
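As a minimal sketch of that kind of extraction: Chrome's built-in Prompt API lets you pass a JSON schema as a `responseConstraint` so the on-device model emits structured output. (This assumes the `LanguageModel` global and `responseConstraint` option as documented for Chrome's built-in AI; the exact surface varies by Chrome version and flags, and the schema and prompt here are made up for illustration.)

```javascript
// Hypothetical sketch: constrain Chrome's on-device model to emit JSON
// matching a schema, for parsing an event out of free-form user text.
// `LanguageModel` only exists in Chrome builds exposing the Prompt API.

const eventSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    date: { type: "string", description: "ISO 8601 date, e.g. 2026-03-14" },
    time: { type: "string", description: "24h time, e.g. 18:30" },
  },
  required: ["title", "date"],
};

async function extractEvent(text) {
  // Guard so this degrades gracefully outside a supporting browser.
  if (typeof LanguageModel === "undefined") return null;
  const session = await LanguageModel.create();
  const raw = await session.prompt(
    `Extract the event described here: "${text}"`,
    { responseConstraint: eventSchema } // schema-constrained decoding
  );
  return JSON.parse(raw);
}
```

Usage would be something like `await extractEvent("dinner with Sam next Friday at 7pm")`, with the constraint guaranteeing parseable JSON even from a small model.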
this is considered a large model. i think you might be surprised how many "small" models chrome has already pulled down on your disk.
but to answer your question: one of the services that uses a small model: PermissionsAIv4
""" Use the Permission Predictions Service and the AIv4 model to surface permission notification requests using a quieter UI when the likelihood of the user granting the permission is predicted to be low. Requires `Make Searches and Browsing Better` to be enabled. – Mac, Windows, Linux, ChromeOS, Android """
I ran a fairly large production test of this, and on _every_ measure except privacy it was worse than a free-tier, server-hosted LLM.
Not happy about that, as I would like to see more local models, but that's the current state of things.
https://sendcheckit.com/blog/ai-powered-subject-line-alterna...
> It is a small model, so what utility can I / Google expect from it?
Precedent for shipping models alongside consumer software.
Potentially without consent if it truly is a silent install.
Something to do with serving more ads. My guess is they will use this to "better target" you, or to drain more information from you for their ads.
It's not a very good small model to be honest.
That said, you might be surprised to learn that some of the models in the 3B-9B range could probably replace 80% of the things non-vibe-coders use ChatGPT for.
It's a good idea to run small models locally, for privacy and cost-saving reasons, if your computer can host them. But how can you trust Google to auto-install one on your machine in 2026? I just couldn't do it.