Hacker News

woeirua · yesterday at 1:26 PM

It's an interesting paper, but I'd like to see a lot more about the types of errors that the LLM makes. Are they happening in the forward pass or the inverse pass? My guess is the inverse pass.


Replies

daveguy · yesterday at 7:14 PM

This sounds like wishful thinking to me.

The tasks are designed to be reversible. Whether it stochastic-parrots in the forward direction or the reverse direction is irrelevant, especially considering these are inference engines. Every pass is a forward pass from the perspective of the LLM/agent. There is no feedback loop, which is part of the reason it's so easy for these things to mangle tasks. They are plausible-sounding sentence/sequence generators.