This is magical thinking.
LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.
I don't care if the machine has a soul; I only care what the machine can produce. With good prompting, the machine produces more "thoughtful" results. As an engineer, that's all I care about.
It is magical thinking to claim that LLMs are definitely physically incapable of thinking. You don't know that. No one knows that: such large neural networks are opaque black boxes that resist interpretation, and we don't really know how they function internally.
You are just repeating that because you read it somewhere before. Like a stochastic parrot. Quite ironic. ;)
Tell Donald Knuth that: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...