No thanks
It only took Google and their AI offering to come up with Graffiti.
There must be some gargantuan black hole around there that kills creativity.
I spent quite some effort to _completely_ get rid of mouse usage in my computer workflows and I believe it paid off.
Both of the text-based demos would have been simpler and faster with traditional mouse-and-keyboard interactions. What is the AI adding?
Haha, April Fools! Good one.
Wait…it's May. Ugh, I'm so confused. :spiral eyes emoji:
I like the idea of a touch screen that you don't touch. Just a few centimeters (variable) off would be fine; they do exist, but I've never used one. I think what's required is a way to slightly flex one finger to activate cursor movement, then a pinky/thumb twitch for a button press. Maybe wear two magnetic silicone nail bands? Current air-touch technologies seem complex and power-hungry, I don't know.
It would be tiresome to hold a hand out all day, though, but good for mobile and handwriting/drawing. It would need zero latency.
Furthermore, the mousepad, rather than the screen, could become the magnetic sensor, so you could rest your palm. The nail bands then become the equivalent of the mouse, so it's a hall-effect mouse. Could the pad detect finger twitches for the buttons, though?
The image editing demo was fun... the model is not very well censored.
I wonder what sort of monstrous power would be unleashed if Google used Plan 9 as a foundation.
do not want
50-year-old tech (the mouse) still hangs around because it is effective. Whatever this Google slop is, it will likely not replace mice. The mouse has been successful at passing the test of time; most of Google's own hardware products don't last 1/20th of that.
There's a reason chairs are still around. They are 2,000+ years old and still waiting to be replaced.
One should be extremely skeptical of claims of replacing tech that has been around for a very long time.
Google needs to beat OpenAI and Anthropic in coding models because that's where the big money is going. I love using the Gemini Pro model for quick questions, but that's not where I'm spending the real money.
They have so many great software engineers but seem unable to use them to speed up coding-AI research. Hopefully, with Sergey's focus, it will get better.
This cursor thing is just another experiment nobody cares about.
Just a few days ago, I found myself marking a part of my screen with a rectangle and pasting it into ChatGPT to find out more about what was referenced there.
This has real utility.
Their video demo is interesting. If it is to be useful, it would need to work on sites like Netflix, and for that to work, they would presumably have to axe DRM. I am fully in favour of removing the pointless energy tax we pay as a society for the highly flawed and ineffective system of video DRM.
Unless, of course, their AI gets the same special privileges as the GPU in accessing DRM content, and everything else stays locked out.
Just seven hours ago there was a plea on HN [0] to please not do this. Seriously, what are they smoking at Google right now?
I tried it in AI Studio and it was extremely disappointing. It did not follow directions. I pointed at the door of the sand castle and said, "Make the door of the sand castle bigger." It created another very big sand castle on the side. Then I pointed at one flag of the castle and said, "Turn this flag into a blue flag." It turned two other random flags blue, but not the one I pointed at.
Like a dream come true...
Nightmares are dreams as well and this is a nightmare like Windows Recall.
Technically wonderful though.
> We’ve been exploring new AI-powered capabilities to help the pointer not only understand what it’s pointing at, but also why it matters to the user.
We couldn't quite track you well enough before. So we're fixing that under the guise of "AI-powered capabilities."
That example with the recipe is funny. Did they really need AI to copy two lines and then compute 2×1?
Being able to make precise edits would be huge for AI.
don't mess with the mouse
Google made a Microsoft Kinect.
It's like watching a demo from the old Xerox PARC, except everybody has only bad ideas. Like an opposite Xerox PARC.
Nice, cute, silly little feel-good demo so that we can all pretend like we’re all going to be making decisions and micro-managing AIs by pointing at things in 5 years. It’s going to be great! The future is bright!
I don't understand why we need to move from an explicit operation like, say, circling something, to a fuzzy one where you have to hope the machine understands what you're pointing at.
I also don't think people want to constantly talk to their computers.
This is pretty neat
Maybe I'm misunderstanding, but what is new about the pointer itself? Seems to be functionally the same as selecting + tooltips / context menus.
Really interesting! This change makes for a faster UX.
There's already a product that does this lol
Aaaaand now I can't remember the name of it
Please leave the pointer alone. He's been with us so long without enshittification.
Thanks, I hate it
Interesting! I wonder how UI will evolve in the long term. If browser-use/computer-use agents and clicky-clones are automating pointer actions, do we really need complex UI anymore? If yes, for how long?