
Show HN: Magnitude – Open-source AI browser automation framework

110 points | by anerli | yesterday at 6:30 PM | 38 comments

Hey HN, Anders and Tom here. We had a post about our AI test automation framework 2 months ago that got a decent amount of traction (https://news.ycombinator.com/item?id=43796003).

We got some great feedback from the community, with the most positive response being about our vision-first approach used in our browser agent. However, many wanted to use the underlying agent outside the testing domain. So today, we're releasing our fully featured AI browser automation framework.

You can use it to automate tasks on the web, integrate between apps without APIs, extract data, test your web apps, or as a building block for your own browser agents.

Traditionally, browser automation could only be done via the DOM, even though that’s not how humans use browsers. Most browser agents are still stuck in this paradigm. With a vision-first approach, we avoid relying on flaky DOM navigation and perform better on complex interactions found in a broad variety of sites, for example:

- Drag and drop interactions

- Data visualizations, charts, and tables

- Legacy apps with nested iframes

- Canvas and WebGL-heavy sites (like design tools or photo editing)

- Remote desktops streamed into the browser

To interact accurately with the browser, we use visually grounded models to execute precise actions based on pixel coordinates. The model used by Magnitude must be smart enough to plan out actions but also able to execute them. Not many models are both smart *and* visually grounded. We highly recommend Claude Sonnet 4 for the best performance, but if you prefer open source, we also support Qwen-2.5-VL 72B.
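As a rough illustration of what "visually grounded" means mechanically: the model predicts pixel coordinates on the screenshot it was shown, and those coordinates must be mapped back to the live viewport before clicking. A minimal sketch of that mapping (the real Magnitude internals may differ):

```typescript
// Sketch: map a model-predicted point on the screenshot it saw back to
// the live viewport's pixel space. Function and field names here are
// illustrative, not Magnitude's actual API.
interface Point { x: number; y: number }
interface Size { width: number; height: number }

function screenshotToViewport(pt: Point, screenshot: Size, viewport: Size): Point {
  return {
    x: Math.round((pt.x / screenshot.width) * viewport.width),
    y: Math.round((pt.y / screenshot.height) * viewport.height),
  };
}

// e.g. a model grounded on 1024x768 screenshots driving a 1280x720 page:
const click = screenshotToViewport(
  { x: 512, y: 384 },
  { width: 1024, height: 768 },
  { width: 1280, height: 720 },
);
console.log(click); // { x: 640, y: 360 }
```

A DOM-free click at those coordinates is then a single call like Playwright's `page.mouse.click(x, y)`.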

Most browser agents never make it to production. This is because of (1) the flaky DOM navigation mentioned above, and (2) the lack of control most browser agents offer. The dominant paradigm is to give the agent a high-level task plus tools and hope for the best. This quickly falls apart for production automations that need to be reliable and specific. With Magnitude, you have fine-grained control over the agent with our `act()` and `extract()` syntax, and can mix it with your own code as needed. You also have full control of the prompts at both the action and agent level.

```ts
// Magnitude can handle high-level tasks
await agent.act('Create an issue', {
  // Optionally pass data that the agent will use where appropriate
  data: {
    title: 'Use Magnitude',
    description: 'Run "npx create-magnitude-app" and follow the instructions',
  },
});

// It can also handle low-level actions
await agent.act('Drag "Use Magnitude" to the top of the in progress column');

// Intelligently extract data based on the DOM content matching a provided zod schema
const tasks = await agent.extract(
  'List in progress issues',
  z.array(z.object({
    title: z.string(),
    description: z.string(),
    // Agent can extract existing data or new insights
    difficulty: z.number().describe('Rate the difficulty between 1-5'),
  })),
);
```

We have a setup script that makes it trivial to get started with an example; just run "npx create-magnitude-app". We’d love to hear what you think!

Repo: https://github.com/magnitudedev/magnitude


Comments

rozap | yesterday at 10:21 PM

There are a number of these out there, and this one has a super easy setup and appears to Just Work, so nice job on that. I had it going and producing plausible results within a minute or so.

One thing I'm wondering is if there's anyone doing this at scale? The issue I see is that with complex workflows which take several dozen steps and have complex control flow, the probability of reaching the end falls off pretty hard, because if each step has a .95 chance of completing successfully, after not very many steps you have a pretty small overall probability of success. These use cases are high value because writing a traditional scraper is a huge pain, but we just don't seem to be there yet.
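The compounding-failure point above is easy to make concrete: with independent per-step success probability p, the chance of finishing an n-step workflow is p^n. A quick sketch:

```typescript
// Probability that an n-step agent workflow completes end-to-end,
// assuming each step succeeds independently with probability p.
function workflowSuccessProbability(p: number, steps: number): number {
  return Math.pow(p, steps);
}

// At 0.95 per step, a 24-step workflow succeeds only about 29% of the time.
console.log(workflowSuccessProbability(0.95, 24).toFixed(2)); // "0.29"
```

Even a seemingly reliable 95%-per-step agent needs retries or checkpointing to survive "several dozen steps."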

The other side of the coin is simple workflows, but those tend to be the workflows where writing a scraper is pretty trivial. This did work, and I told it to search for a product at a local store, but the program cost $1.05 to run. So doing it at any scale quickly becomes a little bit silly.

So I guess my question is: who is having luck using these tools, and what are you using them for?

One route I had some success with is writing a DSL for scraping and then having the llm generate that code, then interpreting it and editing it when it gets stuck. But then there's the "getting stuck detection" part which is hard etc etc.
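The DSL-plus-interpreter route described above can be sketched as follows; the grammar and op names are made up for illustration, not from any real tool:

```typescript
// Tiny scraping DSL an LLM could emit, parsed into typed steps that a
// deterministic interpreter (not shown) would execute. All op names and
// the grammar are hypothetical.
type Step =
  | { op: "goto"; url: string }
  | { op: "click"; selector: string }
  | { op: "extract"; selector: string; field: string };

function parse(source: string): Step[] {
  return source.trim().split("\n").map((line): Step => {
    const [op, ...args] = line.trim().split(/\s+/);
    switch (op) {
      case "goto": return { op: "goto", url: args[0] };
      case "click": return { op: "click", selector: args[0] };
      case "extract": return { op: "extract", selector: args[0], field: args[1] };
      default: throw new Error(`Unknown op: ${op}`);
    }
  });
}

const program = parse(`
  goto https://example.com
  click .product-link
  extract .price price
`);
console.log(program.length); // 3
```

Because the LLM only generates this restricted program (rather than driving the browser directly), each failed run gives you an artifact you can inspect, edit, and re-run; the hard part, as noted, is detecting when it's stuck.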

mertunsall | today at 7:30 AM

In browser-use, we combine vision + browser extraction and we find that this gives the most reliable agent: https://github.com/browser-use/browser-use :)

We recently gave the model access to a file system so that it never forgets what it's supposed to do - we already have a ton of users very happy with the recent reliability updates!

We also have a beta workflow-use, which is basically what's mentioned in the comments here to "cache" a workflow: https://github.com/browser-use/workflow-use

Let us know what you guys think - we are shipping hard and fast!

dataviz1000 | today at 2:10 AM

Hey guys, I got a question.

I've been working on a Chrome extension with a side panel. Think of it like the side panel copilot in VSCode, Cursor, or Windsurf. Currently it automates workflows, but those are hard coded. I've started working on a more generalized automation using langchain. Looking at your code is helpful because in only a few hundred lines of code I can recreate a huge portion of Playwright's capabilities in a Chrome extension side panel, so I should be able to port it to the Chrome extension. That is, I'm creating tools like mouse click, type, mouse move, open tab, navigate, wait for element, etc.

Looking at your code, I'm thinking about pulling anything that isn't coupled to node while mapping all the Playwright capabilities to the equivalent in a Chrome extension. It's busy work.

If I do that, why would I prefer using .baml over the equivalent langchain? What's the difference? Am I comparing apples to oranges? I'm not worried about using langgraph because I should be able to get most of the functionality with xstate v5 [0] plus serialized portable JSON state graphs, so I can store custom graphs on a remote server that can be queried by API.

That is my question. I don't see langchain in the dependencies, which is cool, but why .baml? Also, what am I missing going down this thought path?

[0] https://chatgpt.com/share/685dfc60-106c-8004-bbd0-1ba3a33aba...

grbsh | yesterday at 7:08 PM

Why not just use Claude by itself? Opus and Sonnet are great at producing pixel coordinates and tool usages from screenshots of UIs. Curious as to what your framework gives me over the plain base model.

ewired | today at 12:16 AM

It was interesting to find out that Qwen 2.5 VL can output coordinates like Sonnet 4, or does that use a different implementation?

axlee | yesterday at 10:11 PM

Using this for testing instead of regular Playwright must 10000x the cost and runtime, doesn't it? At what point do the benefits outweigh the costs?

sylware | today at 10:03 AM

Wow, I guess this could be significant for the humans of click/view/account creation farms.

mountainriver | today at 2:11 AM

How many of these are there now?

10yearsalurker | today at 7:56 AM

Pop Pop! (Sorry, I just couldn’t resist)

KeysToHeaven | yesterday at 7:15 PM

Finally, a browser agent that doesn’t panic at the sight of a canvas
