LLMs don't think at all.
Forcing it to be concise doesn't work because it wasn't trained on token strings that short.
They’re able to solve complex, unstructured problems independently. They can express themselves fluently in every major human language. Sure, they don’t actually have a brain like we do, but they emulate one pretty well. What’s your definition of thinking?
> Forcing it to be concise doesn't work because it wasn't trained on token strings that short.
This is a 2023-era comment and is incorrect.