Yes - some degree of reasoning appears to be latent in the structure of language itself. But models trained explicitly on reasoning-focused data still perform better than models trained only on general corpora.*
*At least up to 300B parameters, based on the models we’ve tested.
I wonder about the relationship between a language's grammar, what it can compute, how it encodes information, and what the minimal parameters/structure for reasoning look like...