Hacker News

16bitvoid · yesterday at 10:17 AM

It's right there in the README.

> Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code.

> Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single-digit microseconds, not hundreds of milliseconds.
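
For context, the "full container based sandbox" approach the README contrasts against typically means spinning up a throwaway container per snippet. The sketch below illustrates that per-run overhead; it is a rough illustration only (the Docker CLI via subprocess, an arbitrary image tag, and a stand-in snippet are all my assumptions, and Monty's own API is not shown):

```python
import subprocess
import time

# Stand-in for a snippet an LLM just generated.
llm_code = "print(sum(range(10)))"

# Container-based sandbox: one fresh, network-isolated container per snippet.
# The container spin-up alone usually costs hundreds of milliseconds or more,
# which is the per-call overhead an embedded interpreter avoids.
start = time.perf_counter()
result = subprocess.run(
    ["docker", "run", "--rm", "--network=none", "python:3.12-slim",
     "python", "-c", llm_code],
    capture_output=True, text=True, timeout=60,
)
elapsed = time.perf_counter() - start
print(f"container output: {result.stdout.strip()!r} in {elapsed:.3f}s")
```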


Replies

vghaisas · yesterday at 3:11 PM

Oh, I did read the README, but I still have the question: while it does save on cost, latency, and complexity, the trade-off is that the agents can't run whatever they want in a sandbox, which would make them less capable too.