I do see your concern, and even share it; it's just not really tractable at its core.
There's a lot of chatter lately about e.g. using TLA+ for formal modeling, so that anything downstream can be formally proven. That helps, but the formal model still has to be crafted somehow, which means a pass of semantic interpretation.
Going from binary to spec mechanistically via formal proofs would be possible, but only if a formal spec for the binary structure and the ISA were available. In practice, however, both are just natural-language prose too, meaning another interpretation pass or two. The ISA specs also leave a lot implementation-defined/undefined afaik, to preserve freedom for microarchitecture-level optimization.
Netlists, PDKs, and the like might be public for some RISC-V designs these days, but getting the actual chip behavior typically requires EM simulation on a scale that isn't feasible for any chip performant enough to be of interest. And RISC-V is not a very broadly adopted platform for proprietary consumer software.
Having a human do the semantic mapping is expensive and legally fraught. Having an LLM do it is riskier, but way, way cheaper, and currently a legal grey area. And both can and do make mistakes.
This is why I see this so bleakly. That said, I also think formats like this are delicate enough that even rudimentary empirical testing should provide surprisingly decent behavioral coverage. There's a reason "I can't believe anything ever works at all" is such a common sentiment: practical usage is a remarkably powerful gate, and fuzzing in particular is basically that on steroids.
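To make the "rudimentary empirical testing" point concrete, here's a minimal sketch of the kind of harness I mean: take a known-good sample, flip a few bits, and demand that the parser either succeeds or rejects cleanly. The length-prefixed toy format and the `parse_record` name are just placeholders for whatever reimplementation is actually under test.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy stand-in for a real format parser: a 2-byte big-endian length
    prefix followed by that many payload bytes. Replace with the actual
    reimplementation under test."""
    if len(data) < 2:
        raise ValueError("truncated header")
    length = int.from_bytes(data[:2], "big")
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

def mutate(sample: bytes, max_flips: int = 4) -> bytes:
    """Flip a handful of random bits in a known-good sample."""
    buf = bytearray(sample)
    for _ in range(random.randint(1, max_flips)):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(sample: bytes, iterations: int = 10_000) -> None:
    """Rudimentary empirical gate: every mutated input must either parse
    or be rejected with a clean ValueError -- anything else (crash, hang,
    silent garbage) is a behavioral gap worth looking at."""
    for _ in range(iterations):
        data = mutate(sample)
        try:
            parse_record(data)
        except ValueError:
            pass  # clean rejection is acceptable

if __name__ == "__main__":
    good = (5).to_bytes(2, "big") + b"hello"
    fuzz(good)
    print("no unexpected failures")
```

A real harness would of course go further (coverage-guided mutation, differential comparison against the existing implementation), but even this crude level of brute-force practical usage shakes out a lot.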
I nevertheless still secretly get the heebie-jeebies from the Linux NTFS implementation (me bringing that up was no coincidence).