Hacker News

AxiomLab · yesterday at 9:25 AM

Imposing a strict, discrete topology—like a tree or a DAG—is the only viable way to build reliable systems on top of LLMs.

If you leave agent interaction unconstrained, the probabilistic variance compounds into chaos. By encapsulating non-deterministic nodes within a rigidly defined graph structure, you regain control over the state machine. Coordination requires deterministic boundaries.
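
Concretely, a minimal sketch of that shape (Python; the Node/run_dag names are placeholders, and each node's run callable is where your LLM client would go): nodes may be non-deterministic internally, but the edges and execution order are fixed by the graph, so a model can change the state it returns, never the topology.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Node:
        name: str
        run: Callable[[dict], dict]   # may call an LLM; free to be non-deterministic inside
        deps: List[str] = field(default_factory=list)

    def run_dag(nodes: Dict[str, Node], state: dict) -> dict:
        # Topological execution: activation order is fixed by the graph,
        # not by anything a model decides at runtime.
        done = set()
        while len(done) < len(nodes):
            ready = [n for n in nodes.values()
                     if n.name not in done and all(d in done for d in n.deps)]
            if not ready:
                raise ValueError("cycle or unsatisfied dependency")
            for node in ready:
                state = node.run(state)   # deterministic boundary: state in, state out
                done.add(node.name)
        return state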


Replies

bizzletk · yesterday at 11:51 AM

The article addresses this:

> This made sense when agents were unreliable. You’d never let GPT-3 decide how to decompose a project. But current models are good at planning. They break problems into subproblems naturally. They understand dependencies. They know when a task is too big for one pass.

> So why are we still hardcoding the decomposition?
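
A rough sketch of the alternative the article is pointing at (the JSON shape and the call_llm parameter are assumptions, not from the article): the model proposes the decomposition, and the harness only validates it into a dependency graph before executing anything.

    import json

    def plan_tasks(goal: str, call_llm) -> list:
        # Ask the model to propose the decomposition instead of hardcoding it.
        # call_llm is whatever client you already have (prompt in, text out).
        raw = call_llm(
            "Break this goal into subtasks. Reply as JSON: "
            '{"tasks": [{"id": str, "deps": [str], "prompt": str}]}\n\nGoal: ' + goal
        )
        tasks = json.loads(raw)["tasks"]
        ids = {t["id"] for t in tasks}
        for t in tasks:
            # The validation step is the deterministic part the harness keeps.
            if not set(t["deps"]) <= ids:
                raise ValueError("unknown dependency in task " + t["id"])
        return tasks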

4b11b4 · yesterday at 4:26 PM

My gut says you are correct, though cycles can be permitted given a boundary condition.
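
For instance, a bounded refinement loop (step, is_done, and max_iters are placeholder names): the cycle is acceptable as long as a deterministic exit check and a hard iteration cap close it.

    def run_cycle(step, is_done, state: dict, max_iters: int = 5) -> dict:
        # A permitted cycle: loop a possibly LLM-backed step, but only inside
        # an explicit boundary (a deterministic exit check plus a hard cap).
        for _ in range(max_iters):
            state = step(state)
            if is_done(state):
                return state
        raise RuntimeError("no convergence within max_iters")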
