I read it more as:
We already don't know how everything works, and AI is steering us toward a destination where there is even more of that everything.
I would also add that it may reduce the number of people who are _capable_ of understanding the parts it is responsible for.
Who's "we"?
I am sure engineers collectively understand how the entire stack works.
With LLM-generated output, nobody understands how anything works, including the very model you just interacted with -- as evidenced by "you are absolutely correct".