I'm perplexed: what exactly were people thinking they were? It's nothing more than highly sophisticated statistics.
Do you know of any other statistical model that can "hallucinate"? They clearly have emergent capabilities that come from scale and that are absent in any other statistical model we've ever dreamt up.
We know that LLMs build complex internal representations of language, logic, and concepts rather than doing shallow word-counting.
If you deny that, you probably have only an elementary understanding of how they work; not even Chomsky denies it. The real argument, imo, is whether those internal representations constitute an actual "understanding" of the world or flatten out to something much less interesting.
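For what it's worth, the "internal representations" claim isn't just vibes; it's roughly what the linear-probing literature tests. Here's a minimal sketch of that kind of experiment. Everything in it is an illustrative assumption on my part, not anyone's published setup: the model ("gpt2"), the layer choices, and the toy animal-vs-tool task.

```python
# Sketch: probe a model's hidden states for a simple concept.
# Assumptions (mine, for illustration): GPT-2 small, layers 0 and 6,
# a toy animal-vs-tool word classification task.
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Toy concept task: can a linear classifier separate animal words
# from tool words using only the model's internal activations?
animals = ["cat", "dog", "horse", "eagle", "salmon", "wolf"]
tools = ["hammer", "wrench", "saw", "drill", "chisel", "pliers"]
words = animals + tools
labels = [0] * len(animals) + [1] * len(tools)

def hidden_state(word: str, layer: int):
    """Mean-pooled hidden state of `word` at the given layer."""
    ids = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # hidden_states[0] is the raw embedding layer; higher indices
    # are the outputs of successive transformer blocks.
    return out.hidden_states[layer][0].mean(dim=0).numpy()

# Compare the raw embedding layer (0) against a mid-network layer (6),
# scoring the probe on held-out words via cross-validation.
for layer in (0, 6):
    X = [hidden_state(w, layer) for w in words]
    probe = LogisticRegression(max_iter=1000)
    score = cross_val_score(probe, X, labels, cv=3).mean()
    print(f"layer {layer}: held-out probe accuracy {score:.2f}")
```

A probe scoring well above chance on held-out words at a middle layer is the usual (and still contested) evidence that the model encodes the concept rather than surface strings; real probing studies add control tasks to rule out the probe itself doing the work.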
That perspective, which I think is the right one, makes you wonder what other domains of knowledge will look like when pushed to the boundaries of our capabilities as a species.