If we could run an LLM that, given a seed, produced the same output for the same input, then we'd be able to do this.
There must be good reasons why we don’t have this. I suspect one reason is that the SOTA providers are constantly changing the harness around the core model, so you’d need to version that harness as well.
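One way to make the harness-versioning point concrete: reproducibility only holds if you pin *everything* that influences the output, not just the model and seed. A minimal sketch (all version strings and the `reproducibility_key` helper are hypothetical, purely for illustration):

```python
import hashlib
import json

def reproducibility_key(model_version: str, harness_version: str,
                        seed: int, prompt: str) -> str:
    """Deterministic key over every input that affects the output.
    Identical tuples yield identical keys, so a cached output could
    stand in for re-running the model."""
    payload = json.dumps(
        {"model": model_version, "harness": harness_version,
         "seed": seed, "prompt": prompt},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Same (model, harness, seed, prompt) -> same key.
k1 = reproducibility_key("model-2024-01", "harness-v1", 42, "hello")
k2 = reproducibility_key("model-2024-01", "harness-v1", 42, "hello")

# Bumping only the harness version changes the key, invalidating any
# cached "deterministic" output -- which is why the harness must be
# versioned alongside the model.
k3 = reproducibility_key("model-2024-01", "harness-v2", 42, "hello")
```

If the provider silently changes the harness, `k3` silently diverges from `k1` even though the model and seed are unchanged.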