I'm building something that fixes this exact problem[1].
The landing page doesn't advertise it yet, but essentially, I give agents a small set of tools to explore apps' surfaces, and then an API over common macOS functions, especially those related to accessibility.
The agent explores the app, then writes a repeatable workflow for it. Then it can run that workflow through the CLI: `invoke chrome pinTab`
Why accessibility? Well, it turns out it's just a good DOM in general - it's structure for apps. Not all apps implement it perfectly, but enough do to make it wildly useful.
[1] https://getinvoke.com - note that the landing page is targeted towards creatives right now and doesn't talk about this use case yet
If you're on macOS and interested in this space, I highly recommend you open up the system-provided Accessibility Inspector.app and play around with apps and browsers. See how the green cells might guide an LLM to only need to read/OCR specific parts of a screen, how much text is already natively available to the accessibility engine, and how this could lead to really effective hybrid systems - not just MCPs, but code generators that can build and run their own scripts to crawl your accessibility hierarchy for your workflow!
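To give a feel for how little glue that crawling actually takes, here's a minimal Swift sketch using the public AXUIElement API - roughly what the Inspector shows you interactively. It assumes the host process has been granted Accessibility permission, and the pid is a placeholder for whatever app you want to inspect.

```swift
import ApplicationServices

// Walk an app's accessibility tree and print each element's role and title.
// Requires Accessibility permission for this process
// (System Settings > Privacy & Security > Accessibility).
func dumpTree(_ element: AXUIElement, depth: Int = 0) {
    var role: CFTypeRef?
    var title: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXRoleAttribute as CFString, &role)
    AXUIElementCopyAttributeValue(element, kAXTitleAttribute as CFString, &title)
    print(String(repeating: "  ", count: depth)
          + "\(role as? String ?? "?") \(title as? String ?? "")")

    // Recurse into children, if the element exposes any.
    var children: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXChildrenAttribute as CFString, &children)
    for child in (children as? [AXUIElement]) ?? [] {
        dumpTree(child, depth: depth + 1)
    }
}

let pid: pid_t = 12345  // placeholder: pid of the target app
dumpTree(AXUIElementCreateApplication(pid))
```

An agent-generated script along these lines can target just the subtree relevant to a workflow, which is exactly the hybrid approach described above: read text natively where the tree exposes it, and fall back to vision only for the parts it doesn't.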
I think this is very fertile ground - big labs need to use approaches that can work on multiple platforms and arbitrary workflows, and full-page vision is the lowest common denominator. Platform-specific approaches are a really exciting open space!
This is a good solution: instead of everyone blowing tokens repeating the same computer-use task, come up with a way to share the workflows. You'd need to make sure no shared workflows extract user information (passwords).
Interesting! I started something similar - nowhere near as complete as that, and quite different, but again using accessibility UI elements. The big problem I've found is that so much software does a really poor job of exposing these elements. Here was my approach: https://github.com/willwade/app-automate?tab=readme-ov-file#... - what I do there is build UI templates, either via UIAccess or with a single pass of a vision model.
Now, the argument against this on [reddit](https://www.reddit.com/r/openclaw/comments/1s1dzxq/comment/o...):
"my experience is the opposite actually. UIA looks uniform on paper but WPF, WinForms, and Win32 all expose different control patterns and you end up writing per-toolkit handlers anyway. Qt only exposes anything if QAccessible was compiled in and the accessibility plugin is loaded at runtime, which on shipped binaries is basically never. Electron is just as opaque on Windows as on macOS because it's the same chromium underneath drawing into a canvas. the real split isn't OS vs OS, it's native toolkit vs everything else."
Isn't that basically what Browserbase does? I've found the hardest parts of browser use to be stealth first, then client change management, then browser comprehension (which gets better with every new model).
Does https://github.com/webmachinelearning/webmcp overlap?
If agents are what it finally takes to get good a11y, I'll take it. I'll bitch about it, but I'll take it.