Note that the things SHAP attributes over can be something other than the model parameters (e.g. the model inputs), and it's far from obvious what they should be. Indeed, that's often the central problem in interpretability (what are my actual features?), and SHAP is entirely silent on what those features should be. SHAP could work as a final step if you have a small feature set, but I doubt LLMs will have a small set of features under any reasonable interpretation of what they do.
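To make the "small feature set" constraint concrete, here is a minimal from-scratch sketch (not the `shap` library itself) of exact Shapley attribution, filling absent features from a baseline. The `f`, `x`, and `baseline` names are illustrative. It enumerates all 2^n feature subsets, which is exactly why this only stays tractable when n is small:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at point x.
    Absent features are filled in from the baseline (a common SHAP convention)."""
    n = len(x)
    phi = [0.0] * n

    def eval_subset(present):
        # Evaluate f with only the features in `present` taken from x.
        return f([x[j] if j in present else baseline[j] for j in range(n)])

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k out of n players.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
    return phi

# Toy linear model, where the attributions are easy to sanity-check:
# each feature's Shapley value is just its coefficient times its input.
f = lambda v: 2 * v[0] + 3 * v[1] + v[2]
print(shapley_values(f, x=[1, 1, 1], baseline=[0, 0, 0]))  # → [2.0, 3.0, 1.0]
```

The attributions also satisfy the efficiency property (they sum to `f(x) - f(baseline)`), but the inner loops visit every subset of the other features, so the cost is exponential in the number of features — fine for a handful of interpretable features, hopeless for anything resembling an LLM's internals.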