I believe the author thinks of this problem in terms of “the LLM will figure it out”, i.e. that it will be trained on enough code that compiles that it just needs to assemble the functional blocks.
Which might work to a degree with languages like JavaScript.
That point makes no sense.
Unless the LLM is perfect at scale (extraordinarily unlikely), it still matters that someone understands the actual language. And that language is either natural language, which is supposed to somehow be debuggable, or a language like Rust, which actually is.