Imposing a strict, explicitly defined topology, like a tree or a DAG, is the only viable way to build reliable systems on top of LLMs.
If you leave agent interaction unconstrained, the variance of each probabilistic step compounds across the whole run. By encapsulating the non-deterministic nodes within a rigidly defined graph, you keep the control flow itself deterministic. Coordination requires deterministic boundaries.
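Roughly what I have in mind, as a minimal sketch with a hypothetical `call_llm` stub standing in for the model: the graph is declared up front and executed deterministically, and only what happens inside each node is probabilistic.

```python
from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Hypothetical non-deterministic step (stubbed here)."""
    return f"<model output for: {prompt}>"

class Node:
    def __init__(self, name: str, run: Callable[[Dict[str, str]], str]):
        self.name = name
        self.run = run

# The topology is declared up front and never changes at runtime:
# each node lists the nodes whose outputs it depends on.
DAG: Dict[str, List[str]] = {
    "outline": [],
    "draft":   ["outline"],
    "review":  ["draft"],
}

NODES = {
    "outline": Node("outline", lambda ctx: call_llm("Outline the task.")),
    "draft":   Node("draft",   lambda ctx: call_llm(f"Draft from: {ctx['outline']}")),
    "review":  Node("review",  lambda ctx: call_llm(f"Review: {ctx['draft']}")),
}

def execute(dag: Dict[str, List[str]]) -> Dict[str, str]:
    """Run the fixed graph in dependency order; the control flow is
    deterministic even though each node's output is not."""
    done: Dict[str, str] = {}
    while len(done) < len(dag):
        for name, deps in dag.items():
            if name not in done and all(d in done for d in deps):
                done[name] = NODES[name].run(done)
    return done

print(execute(DAG))
```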
My gut says you're right, though cycles can be permitted as long as each one has a boundary condition, e.g. a capped number of iterations.
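Something like a capped revision loop, sketched here with hypothetical `call_llm` and `is_good_enough` stubs: the cycle is allowed, but the iteration bound keeps the control flow from running away.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical non-deterministic step (stubbed)."""
    return f"<model output for: {prompt}>"

def is_good_enough(draft: str) -> bool:
    """Hypothetical deterministic acceptance check (stubbed)."""
    return False

MAX_ITERATIONS = 3  # the boundary condition: the cycle cannot run forever

def revise_until_bounded(task: str) -> str:
    draft = call_llm(f"Draft: {task}")
    for _ in range(MAX_ITERATIONS):
        if is_good_enough(draft):
            break
        # The cycle: feed the draft back through the model...
        draft = call_llm(f"Revise: {draft}")
    # ...but the loop is guaranteed to exit after MAX_ITERATIONS passes.
    return draft

print(revise_until_bounded("summarize the thread"))
```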
The article addresses this:
> This made sense when agents were unreliable. You’d never let GPT-3 decide how to decompose a project. But current models are good at planning. They break problems into subproblems naturally. They understand dependencies. They know when a task is too big for one pass.
> So why are we still hardcoding the decomposition?
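To make the contrast concrete, a rough sketch (hypothetical `call_llm` stub, assuming it returns a JSON array): with model-driven decomposition the subtask list becomes data the model returns rather than structure baked into the program.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call (stubbed); assume it returns a JSON list of subtasks."""
    return '["research the topic", "write an outline", "draft each section"]'

# Hardcoded decomposition: the structure lives in the program.
FIXED_SUBTASKS = ["research the topic", "write an outline", "draft each section"]

def model_decomposition(task: str) -> list[str]:
    """Model-driven decomposition: the structure is data the model returns."""
    raw = call_llm(f"Break this task into subtasks, as a JSON array: {task}")
    return json.loads(raw)

print(model_decomposition("write a report on agent topologies"))
```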