Hacker News

Reimagining the mouse pointer for the AI era

214 points by devhouse | yesterday at 5:40 PM | 180 comments

Comments

why_at yesterday at 7:08 PM

My first impression coming away from this is skepticism.

Anything with voice controls for routine use is a pretty tough sell. Doing this when you're not completely alone would be annoying to everyone around you.

Most of their examples seem like they could have been done with a right-click drop-down menu, so they don't really need to "re-invent the mouse pointer".

Is this thing talking to Google's servers all the time for the AI integration? Does that mean it won't work if you're not connected to the internet? The privacy concerns are obvious: now Google wants to have an AI watching literally everything you do on your computer?

Does it cost the user anything for the LLM use? If it's free, will it stay free forever? That's quite a lot to give away if they're expecting people to use it to change a single word like in one of their examples. I guess they're expecting to make the money back by gathering data about literally everything you do on your computer.

There might be a killer app for AI integration with personal computers that has yet to be invented, but this doesn't look like it.

arjie yesterday at 7:24 PM

Oh interesting, this is very cool. At first I thought it was just focus-follows-mouse, but it's more interesting. You have certain keywords trigger "add to prompt". Ignoring the voice functionality (which is admittedly crucial right now because other inputs take over focus), I've often wanted to have a continuous conversation with the LLM as I 'point and click' (or tab over and select) various things. Might be neat to have text input focus continue to go to the LLM where I'm typing text, etc.

Sometimes I go to a different page to take a screenshot, other times I'm browsing for a file, and other times I'm highlighting some log lines. Cursor did this well: selecting text in the terminal auto-focused the Cursor agent textbox, so you could talk to the agent, then select some more text, without having to re-select the agent textbox again. The agent is a top-level function in that system, not "just another app I have to switch to" to carry my context.

I have some small amount of bias because I've always felt input-constrained on computers. I have to move my hands to go places and that's exasperating. I've tried head tracking, had a vim pedal for a while, and used tiling WMs and things like this to help, but while my vim-fu is pretty good and I function inside applications very well with it, my cross-application workflow isn't nearly as good.

In the end, perhaps we all have our home offices with our Apple Vision Pros and we talk to them like this to maneuver faster through our machines and get our ideas into them.

Cool research. I wonder what we'll end up with.

wffurr today at 10:49 AM

"Wiggle" the mouse cursor to do something - isn't this incredibly easy to trigger by accident? I remember turning on a "find cursor" mode in Windows once upon a time where it would zoom the mouse cursor on a wiggle, and it fired all the time by accident. Imagine an older person or child who is even less accurate with the mouse, too.

jiehong today at 10:47 AM

Now, with vim/helix key bindings:

1. select text

2. dictate action

Feels very similar to Helix's "select text, then act on it" model.

I think text selection could also be voice-controlled (with a modal voice input), so one could say: "select sentence, action mode, copy and paste it in my list and remove duplicates".
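
As a thought experiment, a toy sketch of what that modal "select, then act" voice grammar could look like; the mode names and phrases are invented, not part of the demo.

    // Toy modal parser for a "select, then act" voice grammar. Mode names
    // and phrases are invented; it only handles comma-separated commands.
    type Mode = "select" | "action";

    interface Command { mode: Mode; verb: string; object?: string }

    function parseUtterance(utterance: string): Command[] {
        let mode: Mode = "select";
        const commands: Command[] = [];
        for (const phrase of utterance.split(",").map(p => p.trim())) {
            if (phrase === "action mode") { mode = "action"; continue; }
            if (phrase === "select mode") { mode = "select"; continue; }
            const [verb, ...rest] = phrase.split(" ");
            commands.push({ mode, verb, object: rest.join(" ") || undefined });
        }
        return commands;
    }

    // parseUtterance("select sentence, action mode, copy, remove duplicates")
    // -> [ { mode: "select", verb: "select", object: "sentence" },
    //      { mode: "action", verb: "copy" },
    //      { mode: "action", verb: "remove", object: "duplicates" } ]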

footy yesterday at 9:23 PM

you can really tell the people building these tools spend a lot of time alone. I work from a home office 90% of the time and I wouldn't want this to be my workflow. I don't want to talk to my computer, I want to listen to music while I work, and I want to not sound deranged and disturb everyone around me when I am working from a coffeeshop or the open-plan office or the airport or the train or whatever.

and that's aside from the obvious privacy problems.

chromacity yesterday at 7:39 PM

My reaction to the first demo (recipe) is that it was slower than typing the same thing on your keyboard.

The second demo seems to be a wash: there's no time saved in saying "move this" versus "move crab". And an app-specific contextual menu would probably be faster.

The third demo doesn't seem to warrant the use of a pointer at all, since there is only one way to interpret the prompt.

None of this means that this approach will not be successful, but there's a reason why so many attempts to revolutionize user interfaces ended up going nowhere. Talking to your computer was always supposed to be the future, but in practice, it's slower and more finicky than typing.

In fact, the only new UI paradigm of the past 28+ years appears to have been touchscreens and swipe gestures on phones. But they are a matter of necessity. No one wants to finger-paint on a desktop screen.

kjellsbells yesterday at 6:59 PM

I sense a privacy problem brewing.

It reminds me of Microsoft Recall in the sense that some portion of the screen is going to be continuously transmitted outside of the user's control.

What happens when someone browses something very private (planning a surprise engagement, looking at medical data, planning a protest)? All that data gets slurped up by Google, where it's subject to a warrant or discovery, or goes into building your advertising fingerprint.

Maybe the idea is that the data is sent to the AI only when you right-click, but that seems like a very thin firewall that a product manager will breach in the interest of delivering "predictive AI" via some kind of precomputed results.

AuthAuth today at 12:48 AM

None of these problems needs a conversation with an AI to solve. If you make the text selectable, then you can do these actions fast and efficiently. This only becomes a product as they make the web shittier and less friendly towards PC workflows.

It's wild that they even put this out as a demo. It should have been picked apart in the internal meeting. There is no way I'd ever show my product taking 5s to change a 1 to a 2 in a piece of text the user was already hovering over, or taking 10s to drag and drop a line of text from one box to another. Even the example of finding a route between two images could be done quickly if images were auto-OCR'd, which is a setting on most image viewers.

ImaCake yesterday at 11:12 PM

I think this falls flat for a technical audience because we already know how to do this stuff. But there are a lot of people who don't know how to copy-paste, or use reverse image search, or apply a filter to a table. Being able to use plain language to do these things is a game changer for them. Sure, it's inefficient and inelegant, but it's an interface that will do for basic technical stuff what the iPad touch screen did for the mouse and keyboard.

torben-friis today at 12:28 AM

Do you know the deep frustration of watching a tech-illiterate person use a PC? When they type "google" in the omnibar and click on Google.com for every search, for example?

Now you get to hear every person in the office do that around you.

Like, good tech, but do Googlers live in the real world? Do they genuinely like the idea of an open office full of people talking to their computers? Do they all live alone without human contact?

juancn yesterday at 7:27 PM

Please don't.

I like text selection exactly how it is. I want precise controls.

It's fine for a touch interface like a phone, but on a computer I expect precision. As much as I can get.

tintor yesterday at 6:46 PM

Of course, it isn't a Google demo if you can't use it to book a table at a restaurant (shown at the bottom of the page).

devhouse yesterday at 11:08 PM

What I actually would be more willing to allow is a version of this that is built into macOS, runs locally, and never phones home. If Apple Intelligence offered an "AI sees everything on my screen" mode, I might turn it on.

botanrice yesterday at 8:58 PM

While these examples might be easy fodder for criticism, I do feel like this whole idea of talking to an LLM across multiple applications, where anything your pointer is on gives it context, is a pretty powerful and cool idea.

I'm imagining a webpage with a link: instead of opening a new tab to quickly google something, or opening three new tabs based on hyperlinks, I can point at a paragraph or line and ask it to tell me about it.

Maybe I can point at a song on Spotify and have it find me the YouTube video, or vice versa (of course this is assuming a tool like this wouldn't stay locked into one ecosystem... which it will).

Point is that the concept of talking to the computer with the mouse as a pointer is pretty cool, and I guess a step closer to that whole sci-fi "look at this part of the screen and do something".

lifis today at 10:16 AM

It's not clear what it actually does, but it seems equivalent to a global right-click menu with "Chat with AI about this".

gobdovan yesterday at 7:56 PM

This is how I always imagined FE development would work once ChatGPT 3 came out. Then Cursor appeared and seeing how successful they were with just a chat and a few tool calls, I thought I was over-complicating things.

Anyway, I built a prototype on this idea, but instead of relying only on hover, I press Option to select a node in a custom AST-ish semantic layer I designed around a minimalist UI grammar, and Option + up/down arrows to move to the parent/child node. This way, I have an accurate pointer to the element I want to talk about, plus a minimal context window (parent component, state, a few navigation-related queries).

What I learned from using it, though, is that the killer use case isn't necessarily the flashy "talk to this UI element" interaction shown in the Google demos. I do use it that way too; I have `Option + Shift + click` to copy a selector to the clipboard, so I can give an LLM connected to the live medium a precise reference to the element I want to discuss.

But the place where it has been most useful day to day is much simpler: source navigation. Point at the thing in the UI, jump to the code that is responsible for it. The difficult part is jumping to the code you actually care about (the code for the UI or for the semantic element?), but in my system that distinction usually turned out to be obvious, which is what makes the interaction useful.
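
For anyone curious what such a layer can look like structurally, here is a rough sketch, not the actual prototype described above; the node shape and helper names are invented for illustration.

    // Hypothetical sketch of a semantic layer: a tree of UI-grammar nodes
    // with parent/child navigation and a selector you can hand to an LLM.
    interface SemanticNode {
        id: string;                  // e.g. "TodoList/Item[3]/Title"
        role: string;                // minimal UI grammar: "list", "item", "button", ...
        parent?: SemanticNode;
        children: SemanticNode[];
        context(): string;           // parent component, state, navigation queries
    }

    let focused: SemanticNode | null = null;

    // Option + down / Option + up in the prototype described above.
    function focusChild(): void {
        if (focused && focused.children.length > 0) focused = focused.children[0];
    }

    function focusParent(): void {
        if (focused && focused.parent) focused = focused.parent;
    }

    // Option + Shift + click: copy a precise reference for the LLM.
    function selectorForPrompt(): string | null {
        return focused ? `${focused.id}\n${focused.context()}` : null;
    }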

Nathanba today at 2:20 AM

Kind of incredible how consistently terrible Google is at everything they do in the AI space. So they choose this demo, write a big blog post, and advertise... and the demo is horrible, it doesn't work. It doesn't track what I circle with the mouse, just apparently where the mouse pointer landed at the end, and only exactly where it landed. Multiple times it said "Got it, I'll move this empty space between the clouds over here" or "Got it, I'll convert this empty area to a sunhat" despite my mouse being only a few pixels away from an actual hat.

TonyAlicea10 today at 2:10 AM

At its core, this is recreating the right click via voice.

Interesting but not “reimagining” anything.

I think the real story here is how vibe coding now enables flashy demo sites like this to be built for a concept that hasn’t yet earned it.

vicentwu today at 9:48 AM

Good work! Context-awareness has huge potential. I don't think this demo hit the right mark, but it definitely shed some light.

walrus01 yesterday at 10:09 PM

Can I use this AI mouse pointer to tell the difference between hotdog and not hotdog?

__MatrixMan__ yesterday at 9:09 PM

I've been iterating on some 3D models for various wacky garage projects I have. It's fun. I've often wished I could click on an arbitrary place and say "add an eye bolt here" or somesuch.

Of course learning proper CAD software is probably the right thing here, but having Claude write Python scripts that generate HTML files referencing three.js to provide a 3D view has gotten me surprisingly far. If something could take my pointer click and reverse whatever coordinate transforms sit between the source code and my screen, so that the model sees my click in terms of the same coordinate system it's writing Python in, well, that would be pretty slick.
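
The "reverse the coordinate transforms" part is essentially what a raycast does in three.js: convert the pointer position to normalized device coordinates, cast a ray into the scene, and read back the hit point in the scene's own coordinate space. A minimal sketch, assuming a scene, camera, and renderer already exist (as they would in the generated HTML):

    import * as THREE from "three";

    // Assumed to exist already in the generated viewer page.
    declare const scene: THREE.Scene;
    declare const camera: THREE.PerspectiveCamera;
    declare const renderer: THREE.WebGLRenderer;

    const raycaster = new THREE.Raycaster();

    renderer.domElement.addEventListener("pointerdown", (event) => {
        const rect = renderer.domElement.getBoundingClientRect();
        // Screen pixels -> normalized device coordinates in [-1, 1].
        const ndc = new THREE.Vector2(
            ((event.clientX - rect.left) / rect.width) * 2 - 1,
            -((event.clientY - rect.top) / rect.height) * 2 + 1,
        );
        raycaster.setFromCamera(ndc, camera);
        const hit = raycaster.intersectObjects(scene.children, true)[0];
        if (hit) {
            // hit.point is in world coordinates -- the same space the
            // generated geometry lives in, so it can go straight back
            // into the prompt: "add an eye bolt at (x, y, z)".
            console.log("clicked model point:", hit.point);
        }
    });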

jpatten yesterday at 6:57 PM

Reminds me of Put That There https://m.youtube.com/watch?v=RyBEUyEtxQo

iamcalledrob today at 6:31 AM

Nitpick, but it bothers me: The human factors of their demo video don't stack up.

Horizontal dragging with a mouse is actually really hard. Nobody's going to use it like that.

Your arm can easily move your hand and cursor up/down by pivoting your shoulder, but there's no mechanism for left/right movement. It's always an arc.

Or put another way: selection will be a lot slower and more tedious than the demo.

AbuAssar yesterday at 6:16 PM

So will Google be monitoring whatever is on the screen continuously, or only when the user says the magic words (this, that, here, there)?

nolist_policy yesterday at 6:55 PM

Wiggle at CAPTCHAs, wiggle at Termux, wiggle at Emacs, wiggle at the Godot Editor, wiggle at my remote desktop.

(Not going to happen)

grumbelbart2 today at 6:44 AM

What is going on with the font in this article? My Firefox renders it as a weird mixture of lower- and upper-case letters, all with the same height. Completely unreadable. The culprit seems to be this:

    font-feature-settings: "ss02" on;
loaderchips yesterday at 6:20 PM

It's beautiful how the human mind can take something very obvious but overlooked and make it into this fantastic innovation. Fab stuff.

maheenaslam yesterday at 7:34 PM

The concept is good, but accuracy in a cluttered environment can be a concern, and misinterpreting context can be a problem.

chamomeal today at 3:52 AM

Kinda related, but this reminded me of the guy who made a voice-controlled text-editing language. It's kinda like vim with your voice. Super cool talk here:

https://www.youtube.com/watch?v=NcUJnmBqHTY

altern8 today at 7:05 AM

I can type pretty fast, so this seems like it would slow me down. Also, everyone in the office would get annoyed pretty quickly.

thih9 today at 8:04 AM

I guess Apple is taking notes and launching a version of that later in visionOS.

SilverSurfer972 today at 6:18 AM

If we want to close the human <-> machine loop as much as possible (pre-Neuralink):

Assume that today the most efficient way for a human to transfer information to a machine is via voice, and that the most efficient way for a machine to convey rich information to a human is by rendering HTML.

Then a combination of screen + eye tracking + voice is all you need. The mouse doesn't make sense anymore.

Links: https://x.com/trq212/status/2052809885763747935

vjvjvjvjghv today at 3:24 AM

I think combining voice and mouse will be perfect for a lot of design work like photo editing or CAD. Like pointing at an edge and saying "put a 2mm chamfer here". To me this would be a really nice workflow.

robot-wrangler yesterday at 9:08 PM

A zigzag merge gesture is obviously a terrible idea until/unless everything is a touch screen. Did they even think about this stuff at all? Ergonomics and RSI aside, if a horizontal drag means add, why not just make a vertical drag mean merge? Not a fan of voice interaction generally, but it's something we'll all be grateful for as we get older. No need to accelerate it.

goosejuice yesterday at 11:39 PM

I don't see how the examples given are much better than just natural language. With support for chaining multiple thought+capture steps, though, this could be pretty expressive and a nice accessible tool. I could see how eye gaze could be incorporated as well.

dandaka yesterday at 7:40 PM

The next generation of OS should have constant video and audio recognition by an on-device LLM. This would provide valuable context for a lot of scenarios. Instead of the frequent copy-pasting we're used to, we could let agents access the context of our whole workflow across different apps.

But Google is a very ill-positioned candidate for such an OS. I would rather trust Apple and local-first, on-device models.

ianbicking yesterday at 8:50 PM

I've been doing something similar to this in a personal Claude Code frontend, though not particularly "magical".

I'm mostly using my system to make comments on long AI-generated documents (especially design documents). I find it works well to have the AI generate something, and then I read through it, making comments along the way.

You can get pretty far just repeating the things you see... "I'm reading [heading] and [comments]". But I do find some use in selecting content and saying "I don't agree with this" or whatever else.

The result is just an augmented message. It looks like:

    <transcript>
      Let's see what we've got here.
      <selection doc="proposal.md" location="paragraph 3">
        The system already...
      </selection>
      No, I don't like how this is approaching the problem, ...
    </transcript>
Then I just send this as a user message. Claude Code (and I'm guessing any of the agentic systems) picks up on the markup very easily. It also helps to label it as a transcript, so it understands there may be errors, and that things like spelling and punctuation are inferred, not deliberate. (Some additional instruction is necessary to help it understand, for example, that it should look for homophones that might make more sense in context.)
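
For concreteness, a rough sketch of how such an augmented message might be assembled; the helper and type names are illustrative, not the actual frontend described above.

    // Hypothetical assembler for the augmented message shown above.
    interface Selection { doc: string; location: string; text: string }

    function buildTranscript(parts: Array<string | Selection>): string {
        const body = parts.map(part =>
            typeof part === "string"
                ? `  ${part}`
                : `  <selection doc="${part.doc}" location="${part.location}">\n` +
                  `    ${part.text}\n  </selection>`
        ).join("\n");
        return `<transcript>\n${body}\n</transcript>`;
    }

    // buildTranscript([
    //     "Let's see what we've got here.",
    //     { doc: "proposal.md", location: "paragraph 3", text: "The system already..." },
    //     "No, I don't like how this is approaching the problem, ...",
    // ]);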

It makes reviewing feel pretty relaxed and natural. I've played around with similar note taking systems, which I think could be great for studying in school, but haven't had the focus on that particular problem to take it very far.

But I think the best thing really is giving the agent a richer understanding of what the user is experiencing and doing and just creating a rich representation of that. The keywords can be useful, but almost only as checkpoints: a keyword can identify the moment to take the transcript and package it up and deliver it.

One difference perhaps in design motivation: I have really embraced long latency interactions. I use ChatGPT with extended thinking by default, and just suck it up when the answer didn't really require thinking. I deliver 10 points of feedback at once instead of little by little. (Often halfway through I explicitly contradict myself, because I'm thinking out loud and my ideas are developing.) I just don't stress out about latency or feedback, and so low-latency but lower-intelligence interactions don't do it for me (such as ChatGPT's advanced voice mode, or probably Thinking Machine's work). I think this focus is in part a value statement: I'm trying to do higher quality work, not faster work.

ungreased0675 today at 4:05 AM

To me, this is a bizarre way of using a computer, but I'm glad they're doing research and trying new things.

amelius yesterday at 10:00 PM

Reimagine the chat interface first. For example, let the user click where the LLM went off the rails.

kixiQu yesterday at 10:12 PM

I'm pretty sure all these models have terms of service that make the user assert they have permission to use the content they're feeding into them (clickwrap infringement-is-the-user's-fault). This kind of integration makes a mockery of that.

hmokiguess yesterday at 7:17 PM

Don't build these things; instead, build protocols and expose system-level APIs for application developers to build things on.

jaccola yesterday at 6:51 PM

This seems like one of those things that is usable infrequently enough to be forgotten/poorly developed/never used. (Even before accounting for the actual failure rate of the LLM, which will be non-zero.)

Perhaps a text box and file upload isn't the perfect interface for every use case, but it is versatile, and that versatility is a huge barrier for anything else to overcome.

1970-01-01 yesterday at 8:52 PM

How about you give me my normal white cursor, and an "AI enhanced" orange cursor only when I'm doing AI things? To use their words, that would be "intuitive AI that meets users across all the tools they use, without interrupting their flow".

RamblingCTO today at 9:02 AM

What a load of horsecrap. Google was never good at usability or UX, but this is a new low. This is as ambiguous as it gets, and good UX is the opposite of that. If I need to undo half the stuff that happened, or an AI starts doing stuff I don't want it to because I'm moving my mouse in a certain way, I'd just get angry and turn it off.

lofaszvanitt today at 9:55 AM

There must be some gargantuan black hole around there that kills creativity.

imdsm today at 6:59 AM

Actually don't hate the concept

chatmasta today at 12:00 AM

I’m having flashbacks to Windows 7 gadgets. I can already imagine some developer marketplace for creating cursor prompts.

iridione yesterday at 6:36 PM

Interesting! I wonder how UI will evolve in the long term. If there are browser-use/computer-use agents and clicky-clones automating pointer actions, do we really need complex UI anymore? If yes, when?
