I've been intrigued for a long time by the thing they call the "data product": the fact that there's a sort of equivalence along a spectrum, from precomputing all of your position analysis and doing no runtime analysis, to knowing nothing in advance and doing everything at runtime. It's a general property of many algorithms that you can precompute varying amounts of work in order to reduce runtime complexity. It's also the difference between "telling an LLM to do a task" and "telling an LLM to write a bunch of code to do a task", which are the two ends of the same spectrum.
Especially interesting is the fact that the optimal strategy, i.e. where on the spectrum you land, depends on the effectiveness of your techniques on each side. The more efficiently you can cognitively compress precomputed decisions, the more it makes sense to precompute; the more efficiently you can select moves at runtime, the more it makes sense to defer to runtime. And the two sides interact a lot: in chess, for instance, there are simple heuristics for many endgames, which means the state space you explore at runtime can terminate as soon as it reaches an endgame whose solution you have memorized.
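The chess point can be made concrete with a toy sketch (everything here is invented for illustration, not a real engine): a "game" where a state n shrinks to n-1 or n//2 and we want the minimum number of moves to reach 0. Small states play the role of endgames: their answers are precomputed into a table, so the runtime search bottoms out as soon as it hits one.

```python
from functools import lru_cache

# "Endgame table": precompute exact answers for all states below this limit.
TABLE_LIMIT = 64
ENDGAME_TABLE = {0: 0}
for n in range(1, TABLE_LIMIT):
    # Built in increasing order, so n-1 and n//2 are always already present.
    ENDGAME_TABLE[n] = 1 + min(ENDGAME_TABLE[n - 1], ENDGAME_TABLE[n // 2])

@lru_cache(maxsize=None)
def moves_to_zero(n: int) -> int:
    """Minimum moves to reach 0, where a move takes n to n-1 or n//2."""
    if n < TABLE_LIMIT:
        # Search terminates here: the state space explored at runtime
        # never descends below the precomputed table.
        return ENDGAME_TABLE[n]
    return 1 + min(moves_to_zero(n - 1), moves_to_zero(n // 2))
```

Raising `TABLE_LIMIT` slides the strategy toward the precompute end of the spectrum; lowering it toward pure runtime search. Both extremes compute the same answers.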
I wonder if anyone knows of work examining this phenomenon in generality?
Isn't this very similar to, or a special case of, trading space for time or vice versa (e.g. in algorithm analysis)?