Hacker News

drbig today at 8:41 AM

The most interesting part is the realization that if the LLM's input is only the output of a (human) professional, then by definition the LLM cannot mimic the process that professional applied to get from whatever inputs they had to that output.

In other words, an LLM can spit out a plausible "output of X," but it cannot encode the process that led X to transform their inputs into their output.


Replies

BrtByte today at 5:33 PM

LLMs obviously aren't reproducing the internal cognitive process, but they might still capture some of the structural patterns that emerge from it

simianwords today at 9:22 AM

I don't get the point of what you are saying. I can ask it to explain how to solve an integral, with steps, right now.

I can ask it to tell me how to write like a person X right now.

Eddy_Viscosity2 today at 8:48 AM

Is it not possible for the input-to-output process to be inferred by the LLM and therefore applied to new inputs to create appropriate outputs?

treetalker today at 1:02 PM

You've pinpointed the connection that people fail to make when they seek legal advice (or even information) from LLMs.

weird-eye-issue today at 8:51 AM

Replace "LLM" with "student" and read that again. You don't just blindly give students output; you teach them, which is what you are supposed to do with an LLM as well.
