This is condescending and wrong at the same time (best combo).
LLMs do stumble into long prediction chains that don’t lead the inference in any useful direction, wasting tokens and compute.
Are you sure about that? Chain of thought does not need to be semantically useful to improve LLM performance. https://arxiv.org/abs/2404.15758