You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.
This goes both ways. You can't assume capabilities based on impressions either, especially with LLMs, which are purpose-built to give an impression of fluent language use.
Also, the designers of these systems appear to agree: once it was shown that LLMs can't reliably do calculations, tool calls were introduced so the model could delegate the arithmetic instead of attempting it in text.
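For what it's worth, the tool-call pattern is roughly: the model emits a structured request and the surrounding program runs the actual computation. A minimal sketch in Python, where fake_model and calculator are hypothetical stand-ins for illustration, not any real vendor API:

    import json
    import operator

    # A tool the host program exposes to the model.
    def calculator(a: float, b: float, op: str) -> float:
        ops = {"add": operator.add, "mul": operator.mul}
        return ops[op](a, b)

    def fake_model(prompt: str) -> str:
        # Stand-in for a real LLM: instead of attempting the arithmetic
        # in text, it returns a structured tool-call request.
        return json.dumps(
            {"tool": "calculator",
             "args": {"a": 1234.0, "b": 5678.0, "op": "mul"}}
        )

    # The host loop: run the model, execute any requested tool,
    # and (in a real system) feed the result back into the model.
    response = json.loads(fake_model("What is 1234 * 5678?"))
    if response.get("tool") == "calculator":
        result = calculator(**response["args"])
        print(f"Tool result: {result}")  # 7006652.0

The point of the design is exactly the one above: the deterministic math happens outside the model, which only decides when to call the tool.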