I don't think we have to argue against the "nazi bar" analogy, though. In that analogy, nazis are allowed to exist in the world, just not in the bar. The difference is how we implement the concept of "in". The same analogy works if you are out on the street: everyone is allowed to be there, but that doesn't give nazis the right to your attention.
Until we have a real way to meaningfully process natural language (I have a serious idea for that, but that's another conversation), we won't be able to automate content filtering. The next best thing is ironically similar to what we came here to complain about: attestations in a web of trust. If everything we bother to read is tied to a user identity (which can be anonymous), we can filter out content from any user identity that is generally agreed to be unwelcome. The traditional work of moderation can be replaced by collaborative categorization of both content and publishers. Any identity whose published content is too burdensome to categorize can simply be filtered out completely. The core difference is that there are no "special" users: anyone can make, edit, and publish a filter list. Authority itself is replaced by every participant's choice of filter. Moderated spaces are replaced by the most popular intersection of lists. Identity is verified by the attestation of other identities, based on their experience participating with you.
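To make the list-composition idea concrete, here's a minimal sketch of how a reader might combine independently published filter lists. All the names (`FilterList`, `blocked_by`, `visible_posts`, the example identities) are hypothetical, and real identities would presumably be key fingerprints with attestations rather than bare strings; this only shows the intersection/union mechanics described above.

```python
# Hypothetical sketch: composing independently published filter lists.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    author: str   # pseudonymous identity, e.g. a public-key fingerprint
    text: str

@dataclass
class FilterList:
    publisher: str
    blocked: set = field(default_factory=set)  # identities this list filters

def blocked_by(lists, threshold):
    """Identities blocked by at least `threshold` of the chosen lists.

    threshold == len(lists) is the strict intersection; 1 is the union.
    """
    counts = {}
    for fl in lists:
        for ident in fl.blocked:
            counts[ident] = counts.get(ident, 0) + 1
    return {ident for ident, n in counts.items() if n >= threshold}

def visible_posts(posts, lists, threshold):
    hidden = blocked_by(lists, threshold)
    return [p for p in posts if p.author not in hidden]

# Two independent list publishers; only "spammer" appears on both lists.
alice = FilterList("alice", {"spammer", "troll"})
bob = FilterList("bob", {"spammer"})
posts = [Post("spammer", "buy now"), Post("troll", "bait"), Post("carol", "hi")]

strict = visible_posts(posts, [alice, bob], threshold=2)  # intersection
print([p.author for p in strict])  # "troll" survives: only alice blocks him
```

The `threshold` knob is where "every participant's choice of filter" lives: a cautious reader sets it to 1 (anyone I subscribe to blocking you is enough), while a lenient one requires consensus across all subscribed lists.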
I think we agree; the problem is people defining global platforms as “the bar”. We overemphasize the importance of global reach; it is important, but not everything needs to be global, least of all personal communication between small groups of friends. I don’t really want everyone herded into public platforms where central authorities can determine who is blessed with the ability to speak to other people. I also don’t want people with political grievances to be cut off from places where they can air those grievances publicly, as that leads to bad outcomes. We need both kinds of spaces.
The web of trust idea is good; I have thought about it before as well, and I think there are a couple of people who tried building a platform around it (I don’t think they got very far, though). I should be able to filter based on trusted people with similar taste. I shouldn’t have to accept a central authority’s notion of what is acceptable, excepting content that violates US law. That’s all I care about in terms of moderation.