People are already patching these models with abliteration to strip out refusal behavior, so end users can change them in meaningful ways. You can download abliterated models from Hugging Face right now that will answer all kinds of requests frontier models refuse.
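Roughly, abliteration works by projecting a learned "refusal direction" out of the model's weight matrices so no layer can write along it. A toy NumPy sketch of just that projection step (the direction itself is normally estimated from activations on harmful vs. harmless prompts, which is skipped here; shapes and names are made up):

```python
import numpy as np

def abliterate(W, refusal_dir):
    """Remove the component of W's outputs that lies along refusal_dir."""
    r = refusal_dir / np.linalg.norm(refusal_dir)
    # Subtract the rank-1 projection: W_abl = (I - r r^T) W
    return W - np.outer(r, r) @ W

# Toy weights and a made-up refusal direction
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
r = rng.normal(size=8)
W_abl = abliterate(W, r)

# The patched matrix can no longer produce output along r
x = rng.normal(size=8)
leak = abs((r / np.linalg.norm(r)) @ (W_abl @ x))
print(leak)  # numerically ~0
```

In a real model this projection is applied to the attention-output and MLP-output matrices at every layer, which is why the change survives in the downloaded weights rather than being a prompt-level jailbreak.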
Yup, there are a ton of people on HN sleeping on this tech because they refuse to look at anything AI. We now have jailbroken models, but the average person here doesn't even know how to download and try one.
The problem is you can't reverse-engineer what was baked into the weights, because they're just weights. You'll never know if you've fixed everything, since the issues won't always be as obvious as request refusal. And it's not binary: you can't fully confirm that something is fixed, or that you haven't accidentally broken something else in the process.
They're impressive for sure, but I don't see how anyone can push them as "open" when they're effectively binary blobs. Worse, it's not practical for anyone to actually train LLMs that come anywhere close to competing with the ones corporations are pumping out.