The video they show (which is probably exaggerated by cutting out LLM generation time) is pretty sci-fi. I don't know how it works in practice, but it looks fun to try out. If this could run locally, I'd love to have a feature like that.
Most people don't really seem to care about data collection when it comes to AI usage. A lot of people who will feed every detail of their lives to Gemini, ChatGPT, Bing, Claude, Mistral, or shady clusters across the internet offering bargain-bin prices will probably be fine with Gemini here too, as long as it doesn't interfere unnecessarily.
> Most people don't really seem to care about data collection when it comes to AI usage.
That assumes you intended to use AI in the first place. People are going to accidentally upload random private content to Google.
It probably works similarly to how Gemini has worked on Android for a while now.
You can point at or select anything on the screen, and it understands and searches that context. If you select a block of text, even text inside an image, it lets you copy it or search for it online. Otherwise, it can search the image itself.
I use it often. It's intuitive and fast even on non-flagship phones.
I'd wager their A/B tests went well enough to warrant porting the feature from phones to their new "Chromebook".