Every major leap forward triggers Luddism in those prone to histrionics.
You have to offload cognition in order to reach the next level of abstraction. That's how we've always tackled harder problems.
A good explanation is foreplay, not a replacement for the act itself. If people stop there, that's a premature-pedagogy problem, not an AI problem.
Somewhere, an AI is summarizing this comment for someone right now, and that person understands the issue better than you do.
This is not just another abstraction. It is something fundamentally different: a jump away from deterministic, transparent processes to a probabilistic black box. It's not like the jump from orality to books to digital media, or from handwritten arithmetic to calculators to programs. Those abstractions were solid and dependable; they could be relied upon to tackle harder problems. This one is beyond leaky.
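A minimal sketch of that contrast, in Python. `sampled_summary` here is a hypothetical stub standing in for a temperature-sampled model, not a real API; the point is only that the old abstractions were referentially transparent and this one is not:

```python
import random

# Deterministic abstraction: the same inputs always yield the same output.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 2) == add(2, 2) == 4  # holds on every run, on every machine

# Hypothetical stand-in for a sampled LLM summary (no real model API here):
# at temperature > 0, repeated calls over identical input can disagree.
def sampled_summary(text: str, seed: int) -> str:
    rng = random.Random(seed)
    return rng.choice([
        "The author argues X.",
        "The author argues not-X.",  # plausible-sounding, but wrong
    ])

runs = {sampled_summary("same input", seed=s) for s in range(10)}
print(runs)  # more than one distinct "summary" of the same text
```

A calculator that returned 4.1 one run in ten would never have become the substrate for harder problems; that's the dependability gap being pointed at.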
The assumption that "that person understands the issue better than you" is a bold one when even the best AI models routinely produce confidently false summaries of any given issue.