This feels like it would run against the “I bought my device, I should control how it behaves” line of thinking.
But it would be pretty well in line with the "I trust my contact with this communication, but only if they're not systematically misled into copying it to readily exploitable, insecure storage" line of thinking.
Since the program's purposes lean heavily toward private communication, I'm inclined to think that takes precedence here, especially when you weigh the consequences of dropping default message previews against those of revealing supposedly private information by default.
True, though the device could simply not be connected to that chat if the user doesn't want to implement the policies necessary to access that chat.
The major hole here: you turn off your notifications and don't keep a bunch of database records, but the threat actor somehow finds out who your contacts are, gets hold of one of their phones, and can then see all the messages you sent via that contact's notifications database. So if you want to be able to trust the device for secure communications, you can't allow that.
Smartphones in general run against the “I bought my device, I should control how it behaves” line of thinking.
I think it fits in pretty well with Signal. As it stands, a group chat can control when a message is automatically deleted for everyone, so everyone can rely on that being a shared setting. That's an intentional design decision. There's no individual opt-out.
An individual can disable showing names or message content in notifications on iOS, or set "mute messages" for a chat to stop notifications from appearing for that specific chat, but nothing gives group members any assurance that the other members are doing the same.