No, tokenization is not the only reason. A next-word predictor fundamentally has a hard time executing algorithms, even ones as simple as counting.
Counting is one of the algorithms expressible as a RASP program, a formalism that transformers can only closely approximate.
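To make that concrete, here is a minimal sketch of counting written in RASP-style primitives (select / aggregate). The function names are illustrative, not the actual RASP library API: uniform attention can only compute the *fraction* of matching positions, so recovering an exact count requires an extra length-dependent step, which is part of why transformers only approximate it.

```python
def select_all(seq):
    # "select" builds an attention pattern; here every position
    # attends uniformly to every other position.
    n = len(seq)
    return [[True] * n for _ in range(n)]

def aggregate(pattern, values):
    # "aggregate" averages the selected values at each position,
    # mimicking uniform attention averaging.
    out = []
    for row in pattern:
        selected = [v for v, keep in zip(values, row) if keep]
        out.append(sum(selected) / len(selected))
    return out

def count_token(seq, target):
    # Indicator of the target token at each position.
    indicator = [1.0 if tok == target else 0.0 for tok in seq]
    # Uniform attention yields the fraction of matches...
    fraction = aggregate(select_all(seq), indicator)
    # ...so the count is only recovered by multiplying by the
    # sequence length, a step the model must learn to approximate.
    return round(fraction[0] * len(seq))

print(count_token(list("strawberry"), "r"))  # -> 3
```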