Good. The article makes sense to me: it gives names to things I'd already internalized intuitively. And not only does it not end with a proposal to use AI to fix everything, it actually explains how AI (or at least today's LLMs) fails at the same boundaries as we do, which is exactly what I've been seeing so far.
I'm looking forward to whatever these people come up with, because I believe they understand the problem, which is the best starting position you can have.