“Faithful to What?” On the Limits of Fidelity-Based Explanations
arXiv:2506.12176v4 Announce Type: replace-cross
Abstract: In explainable AI, surrogate models are commonly evaluated by their fidelity to a neural network’s predictions. Fidelity, however, measures alignment to a learned model rather than alignment to the data-generating signal underlying the task. This work introduces the linearity score $\lambda(f)$, a diagnostic that quantifies the extent to which a regression network’s input–output behavior is linearly decodable. $\lambda(f)$ is defined as an $R^2$ measure of a linear surrogate’s fit to the network. Across synthetic and real-world regression datasets, we find that surrogates can achieve high fidelity to a neural network while failing to recover the predictive gains that distinguish the network from simpler models. In several cases, high-fidelity surrogates underperform even linear baselines trained directly on the data. These results demonstrate that explaining a model’s behavior is not equivalent to explaining the task-relevant structure of the data, highlighting a limitation of fidelity-based explanations when used to reason about predictive performance.
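As a rough illustration of the definition (a minimal sketch, not the paper’s implementation), the snippet below estimates $\lambda(f)$ by fitting a linear surrogate to a trained model’s predictions and reporting the surrogate’s $R^2$ against those predictions. The helper name `linearity_score`, the scikit-learn usage, and the toy stand-in model are all assumptions for illustration.

```python
# Sketch: lambda(f) as the R^2 of a linear surrogate fit to a model's outputs.
# Assumed helper `linearity_score`; not the authors' code.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score


def linearity_score(f, X):
    """R^2 of a linear surrogate fit to f's predictions on inputs X.

    f : callable mapping an (n, d) array to (n,) predictions (the trained network).
    X : (n, d) array of probe inputs.
    """
    y_model = np.asarray(f(X)).ravel()               # network predictions = surrogate targets
    surrogate = LinearRegression().fit(X, y_model)   # linear surrogate of the network
    return r2_score(y_model, surrogate.predict(X))   # fidelity of the surrogate to f


if __name__ == "__main__":
    # Toy nonlinear "network": lambda(f) should fall well below 1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    f = lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    print(f"lambda(f) ~ {linearity_score(X=X, f=f):.3f}")
```

Note that this score is computed against the model’s predictions, not the ground-truth targets, which is exactly the distinction the abstract draws: a surrogate can match $f$ closely (high $\lambda(f)$, high fidelity) while saying little about how well $f$ captures the data-generating signal.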