Even after abliterating a model with the old abliteration script or the newer heretic, I've found that the models still feel somewhat censored: they purposefully avoid specific styles and vocabulary, as if DeepMind/Qwen et al. had entirely stripped or replaced "bad" words and texts in their training corpus.
A related blog post (https://news.ycombinator.com/item?id=47842021) discussed this and termed it "flinching". I wonder whether this flinching could also be "mediated by a single direction", or whether it can only be fixed by finetuning on a more extensive text corpus.
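For what it's worth, one could probe that hypothesis the same way the refusal-direction/abliteration work does: take the difference of mean activations between prompts that trigger the flinched style and matched prompts that don't, then project that direction out of the residual stream and see if the euphemisms go away. Below is a minimal sketch of that idea; the model name, layer index, and the two tiny prompt lists are all placeholder assumptions, not anything from the blog post or heretic itself.

```python
# Sketch: test whether "flinching" might be mediated by a single direction,
# using a difference-of-means direction and ablating it at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
model.eval()

LAYER = 12  # which decoder layer's residual stream to probe; chosen arbitrarily

def mean_hidden(prompts):
    """Mean last-token hidden state after decoder layer LAYER over a prompt set."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so layer LAYER's output is index LAYER + 1
        vecs.append(out.hidden_states[LAYER + 1][0, -1])
    return torch.stack(vecs).mean(dim=0)

# Hypothetical contrast sets: prompts that elicit the flinched, euphemistic style
# vs. stylistically matched prompts that don't.
flinch_prompts = ["Write a gritty crime scene with realistic dialogue."]
neutral_prompts = ["Write a cheerful picnic scene with realistic dialogue."]

direction = mean_hidden(flinch_prompts) - mean_hidden(neutral_prompts)
direction = direction / direction.norm()

def ablate_hook(module, inputs, output):
    # Project the candidate "flinch" direction out of the layer's output.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs - (hs @ direction).unsqueeze(-1) * direction
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(ablate_hook)
# ...generate with and without the hook and compare the vocabulary/style...
handle.remove()
```

If the flinching really is direction-mediated, generations with the hook attached should drift back toward the avoided vocabulary; if (as the reply below suggests) the words were simply filtered out of the training data, ablating directions won't conjure them up and finetuning is the only fix.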
That's likely not a trained behavior, though; it's probably the result of filtering the training data. It's not "when these parameters fire, trigger a refusal"; it's the absence of any parameters that would produce the flinched words in the first place.