Hacker News

skybrian | today at 4:19 AM

You're dismissing LLM-generated text as the "merest resemblance of thinking" when the way it resembles thinking is becoming increasingly useful.

When I prompt a coding agent to fix a bug, it outputs text describing a hypothesis, then text that results in shell commands being run to test that hypothesis. If the output shows it guessed wrong, it outputs more text to try a different hypothesis, then more text to edit the code, and in the end the bug is fixed.
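That loop can be sketched in a few lines. This is a hypothetical, hard-coded illustration, not how any real agent is implemented: the hypotheses, test commands, and fixes here are stand-ins for what an LLM would generate on the fly.

```python
import subprocess

def fix_bug(hypotheses):
    """Hypothetical sketch of the hypothesis-test loop described above.

    `hypotheses` is a list of (description, test_command, fix) tuples.
    A real agent would generate each step with an LLM; here they are
    supplied up front for illustration.
    """
    for description, test_cmd, fix in hypotheses:
        # Run a shell command to test the current hypothesis.
        result = subprocess.run(test_cmd, shell=True,
                                capture_output=True, text=True)
        if result.returncode == 0:
            # Hypothesis confirmed by the output: apply the fix.
            return f"confirmed: {description}; applying: {fix}"
        # Guessed wrong: fall through and try the next hypothesis.
    return "no hypothesis confirmed"

# Example run with made-up hypotheses ("false" fails, "true" succeeds):
outcome = fix_bug([
    ("off-by-one in parser", "false", "patch parser"),
    ("stale cache entry", "true", "invalidate cache"),
])
```

The point is that the loop itself is mechanical; what makes the agent useful is that the generated text is a good enough stand-in for each step.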

The text resembles the output of a reasoning process closely enough to actually work. Maybe, for some purposes, it doesn't matter if it's "real" or not?

What does "real" reasoning do for us that the imitation doesn't do? Does it come up with better hypotheses? Is it better at testing them? Sometimes, but not always. Human reasoning is more expensive, less available, and sometimes gets poor results.