I agree with you, but many people have yet to understand that content they disagree with will continue to exist no matter what, and that central gatekeepers cannot eliminate it.
The fucking “nazi bar” analogy has ruined an entire generation. You would think after centuries of trying to stamp out competing ideas, humans would finally come to terms with the fact that it cannot be done.
Small curated groups are the only way to enforce ideological orthodoxy. You cannot force it on the public, nor can you punish the public for holding bad ideas without creating blowback and resistance.
I don't think we have to argue against the "nazi bar" analogy, though. In that analogy, nazis are allowed to exist in the world, just not in the bar. The difference is how we implement the concept of "in". The same analogy works if you are out on the street: everyone is allowed to be there, but that doesn't give nazis the right to your attention.
Until we have a real way to meaningfully process natural language (I have a serious idea for that, but that's another conversation), we won't be able to automate content filtering. The next best thing is ironically similar to what we came here to complain about: attestations in a web of trust.

If everything we bother to read is tied to a user identity (which can be anonymous), we can filter out content from any identity that is generally agreed to be unwelcome. The traditional work of moderation can be replaced by collaborative categorization of both content and publishers. Any identity whose published content is too burdensome to categorize can simply be filtered out entirely.

The core difference is that there are no "special" users: anyone can make, edit, and publish a filter list. Authority itself is replaced by every participant's choice of filter. Moderated spaces are replaced by the most popular intersection of lists. Identity is verified by attestations from other identities, based on their experience interacting with yours.
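To make the mechanics concrete, here's a rough Python sketch of what I mean. Everything in it is hypothetical: the names (Identity, FilterList, the vouch threshold) are invented for illustration, not part of any existing protocol.

```python
from dataclasses import dataclass

# Hypothetical sketch only; none of these names refer to a real protocol.

@dataclass(frozen=True)
class Identity:
    """A persistent, possibly anonymous publisher identity (e.g. a public key)."""
    key: str

@dataclass(frozen=True)
class FilterList:
    """A list anyone can make, edit, and publish; no privileged moderator role."""
    author: Identity
    blocked: frozenset[Identity] = frozenset()

def visible(author: Identity, subscriptions: list[FilterList]) -> bool:
    """A post is shown only if no list the reader subscribes to blocks its author."""
    return all(author not in fl.blocked for fl in subscriptions)

def attested(target: Identity,
             attestations: dict[Identity, set[Identity]],
             trusted: set[Identity],
             min_vouches: int = 2) -> bool:
    """An identity is 'verified' once enough identities the reader already trusts
    have attested to a positive experience interacting with it."""
    vouches = sum(1 for voucher in trusted
                  if target in attestations.get(voucher, set()))
    return vouches >= min_vouches

# Two readers subscribed to the same lists end up seeing the same "space".
alice, bob, troll = Identity("alice-pk"), Identity("bob-pk"), Identity("troll-pk")
shared_list = FilterList(author=alice, blocked=frozenset({troll}))

print(visible(bob, [shared_list]))    # True: unblocked identities pass through
print(visible(troll, [shared_list]))  # False: filtered out, not deleted
```

The point is in visible(): nothing is deleted anywhere, a reader just sees the intersection of whatever the lists they chose to subscribe to allow.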