Hacker News

grey-area · yesterday at 9:10 AM

I wonder if the style shift has anything to do with training for conversation (i.e. tuning models to respond well in a chat setting)?


Replies

capnrefsmmat · yesterday at 1:23 PM

Probably. One common trait of LLM output is grammatical features that signal information density, like nominalizations, longer words, participial clauses, and so on. Perhaps training tasks that involve asking the LLMs for concise explanations or summaries encourage the use of these features to give denser answers.
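As a rough illustration of two of the features mentioned (nominalizations and longer words), here is a minimal sketch that scores text with crude proxies. The function name, the suffix list, and the example sentences are all hypothetical; matching suffixes is a heuristic stand-in for real morphological analysis, not an established metric.

```python
import re

# Hypothetical suffix list: common English nominalization endings.
NOMINALIZATION_SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

def density_profile(text: str) -> dict:
    """Crude proxies for information density: average word length and
    the fraction of words that look like nominalizations."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    if not words:
        return {"avg_word_len": 0.0, "nominalization_rate": 0.0}
    # Length cutoff avoids short false positives like "mention" matching poorly.
    nominalizations = [w for w in words
                       if w.endswith(NOMINALIZATION_SUFFIXES) and len(w) > 6]
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "nominalization_rate": len(nominalizations) / len(words),
    }

# Invented examples: a chatty sentence vs. a nominalization-heavy one.
chatty = "So basically the model just got better because people liked it more."
dense = "Optimization of conversational alignment encourages nominalization and compaction."
print(density_profile(chatty))
print(density_profile(dense))
```

On the two invented sentences, the dense one scores higher on both proxies, which is the pattern the comment suggests chat-style training might amplify.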