This is just sleight of hand.
In this model the spec/scenarios are the code. They are curated and managed by humans, just as code is.
They say "non-interactive". But of course their work is interactive. An AI agent's feedback loop takes minutes to hours, whereas you see the result of a code change in seconds. A slower loop doesn't mean AI agents aren't interactive.
I'm very AI-positive, and what they're doing is different, but they are basically just lying. It's a new word for a new instance of the same old type of thing. It's not a new type of thing.
The common anti-AI trope is "AI just looked at <human output> to do this." The common pro-AI trope from StrongDM is "look, the agent is working without human input." Both of these takes are fundamentally flawed.
AI will always depend on humans to produce results that are relevant to humans. That's not a flaw of AI; it's more a constraint that humans impose. Consequently, "AI needs human input to produce results we want to see" should not detract from the intelligence of AI.
Why is this true? At a certain point you run into Kolmogorov complexity: with fixed model weights, fixed memory, and a bounded prompt size, the pigeonhole principle says that not every output can be produced, no matter which input you supply.
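The counting argument can be sketched in a few lines of Python. The vocabulary size and prompt length here are toy numbers, not real model dimensions; the point is only that a deterministic mapping from prompts to outputs can emit at most as many distinct outputs as there are distinct prompts:

```python
# Pigeonhole sketch with toy numbers (not real model sizes).
VOCAB = 4        # assumed token vocabulary size
MAX_PROMPT = 3   # assumed maximum prompt length in tokens

# Every possible prompt of length 0..MAX_PROMPT.
num_prompts = sum(VOCAB ** n for n in range((MAX_PROMPT) + 1))

# Every possible string of length exactly MAX_PROMPT + 1.
num_longer_strings = VOCAB ** (MAX_PROMPT + 1)

# A deterministic model with fixed weights maps each prompt to one
# output, so it can reach at most num_prompts distinct outputs.
# There are more strings of length MAX_PROMPT + 1 than prompts,
# so some of them are unreachable for any input.
print(num_prompts, num_longer_strings)  # 85 256
assert num_longer_strings > num_prompts
```

Sampling temperature doesn't change the asymptotics: it widens the reachable set per prompt but the machine is still finite, so the same counting bound applies at some larger output length.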
Recursive self-improvement doesn't get around this problem. Where does it get the data for the next iteration? From interactions with humans.
Given the unbounded complexity of mathematics, for instance computing Busy Beaver numbers, this amounts to a proof that AI cannot solve every problem. Humans appear to be limited in this regard as well, but there is no proof that humans are fundamentally limited in the way AI is. That absence of proof is precisely the advantage in intelligence that humans will always hold over AI.