Hacker News

kvdveer · yesterday at 7:56 AM

Two things are holding back current LLM-style AI from being of value here:

* Latency. LLM responses are measured in thousands of milliseconds, whereas this project targets tens of milliseconds; that's off by roughly two orders of magnitude.

* Determinism. LLMs are inherently non-deterministic. Even with temperature=0, slight variations in the input lead to major changes in the output. You really don't want your DB to be non-deterministic, ever.
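A toy sketch of the behavior described in the determinism bullet (not a real LLM; `toy_logits` and `greedy_next_token` are hypothetical stand-ins): temperature=0 decoding is just argmax over the logits, so identical inputs always produce identical outputs, yet changing a single input token can still flip the result entirely.

```python
import numpy as np

def toy_logits(tokens):
    # Hypothetical stand-in for a model's next-token logits:
    # a fixed function of the input, with no randomness at all.
    v = np.zeros(5)
    for i, t in enumerate(tokens):
        v[(t + i) % 5] += 1.0
    return v

def greedy_next_token(logits):
    # temperature=0 decoding reduces to argmax, which is
    # deterministic for any given input.
    return int(np.argmax(logits))

# Identical inputs always yield identical outputs...
assert greedy_next_token(toy_logits([1, 2, 3])) == \
       greedy_next_token(toy_logits([1, 2, 3]))

# ...yet changing a single input token can move the argmax entirely:
# the "slight variation in input, major change in output" behavior.
assert greedy_next_token(toy_logits([1, 2, 3])) != \
       greedy_next_token(toy_logits([1, 2, 4]))
```

Note this also illustrates the distinction raised in the replies: the mapping is perfectly deterministic, it's just highly sensitive to its input.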


Replies

qeternity · yesterday at 9:49 AM

> LLMs are inherently non-deterministic.

This isn't true, and certainly not inherently so.

Changes to input leading to changes in output does not violate determinism.

simonask · yesterday at 8:00 AM

> 1000s of milliseconds

Better known as "seconds"...

olau · yesterday at 8:09 AM

The suggestion was not to use an LLM to compile the expression, but to use an LLM to build the compiler.