Two things are holding back current LLM-style AI from being of value here:
* Latency. LLM responses are measured on the order of 1000s of milliseconds, while this project targets 10s of milliseconds; that's off by almost two orders of magnitude.
* Determinism. LLMs are inherently non-deterministic. Even with temperature=0, slight variations in the input lead to major changes in the output. You really don't want your DB to be non-deterministic, ever.
> 1000s of milliseconds
Better known as "seconds"...
The suggestion was not to use an LLM to compile the expression, but to use an LLM to build the compiler.
> LLMs are inherently non-deterministic.
This isn't true, and certainly not inherently so.
Changes to the input leading to changes in the output do not violate determinism; determinism only means that the same input always produces the same output.
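To make the distinction concrete, here's a toy illustration using a hash function rather than an LLM: the function is fully deterministic, yet a one-character change in the input completely changes the output.

```python
import hashlib

def digest(s: str) -> str:
    # A deterministic function: the same input always yields the same output.
    return hashlib.sha256(s.encode()).hexdigest()

# Same input, same output: that's determinism.
assert digest("hello") == digest("hello")

# A tiny change in the input changes the output entirely,
# yet the function remains fully deterministic.
assert digest("hello") != digest("hello!")
```

Input sensitivity and non-determinism are orthogonal properties; the complaint about temperature=0 conflates the two.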