Hacker News

simianwords · today at 9:22 AM · 4 replies

I don't get the point of what you're saying. I can ask it to explain how to solve an integral right now, with steps.

I can ask it to tell me how to write like some person X right now.


Replies

blargey · today at 8:34 PM

“i can ask it to give a text description of a linear logical math process that has been described in text countless times”

If you think "the tacit knowledge and conscious/subconscious reasoning mix that caused X to write like X" can be meaningfully captured by some one-page "style guide" like llmtropes, I'm not sure what to tell you. Such a style description would be informed by a soup of reviewers who most certainly cannot write like X themselves, even though their observations are stronger and more nuanced than whatever the LLM picked up.

RobRivera · today at 4:35 PM

Actually, this is the crux, and the nuance that makes discussing LLM specifics a pain in general conversation.

If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Instead, what you will receive is text that follows the statistically most likely response (in accordance with the sampling and perplexity tuning) to such a question, given the training corpus.
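A toy sketch of that point (this is a deliberately crude bigram model, nothing like a real transformer, and the corpus, function names, and temperature handling are all illustrative assumptions): trained only on a narrow corpus, the model can only sample continuations it has statistically observed, and an out-of-distribution prompt like "integral" gets no sensible answer at all.

```python
import math
import random
from collections import Counter, defaultdict

# Hypothetical narrow "training set": a few Steinbeck titles, nothing else.
corpus = "of mice and men the grapes of wrath east of eden".split()

# Count bigram transitions observed in training.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word, temperature=1.0):
    """Sample the next word from observed continuations.

    Low temperature sharpens the distribution toward the most
    frequent continuation; the model never 'reasons', it only
    reproduces statistics of its training text.
    """
    options = counts.get(word)
    if not options:
        return None  # prompt token never appeared in training
    words = list(options)
    logits = [math.log(c) for c in options.values()]
    scaled = [l / temperature for l in logits]
    z = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / z for s in scaled]
    return random.choices(words, weights=probs)[0]

print(sample_next("integral"))  # None: "integral" is out of distribution
print(sample_next("of"))        # one of the continuations seen in training
```

The point of the sketch: asking this model about integrals doesn't produce a wrong answer so much as whatever continuation its training statistics make likely, which is exactly the distinction being drawn above.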

Peritract · today at 9:41 AM

"Explain how to solve" and "write like X" are crucially different tasks. One of them is about going through the steps of a process, and the other is about mimicking the result of a process.

mysterydip · today at 10:50 AM

Is the reason it can show steps for solving an integral simply that the training set contained webpages or books showing how to do it?
