This is very interesting. I don't see much discussion of interpretability in the day-to-day discourse of AI builders. I wonder if everyone assumes it is either already solved, or too out of reach to bother stopping and thinking about.
Most interpretability techniques haven't yet been shown to be useful for everyday model pipelines. However, the field is working hard to change this.