The Inconsistency Critique: Epistemic Practices and AI Testimony About Inner States

arXiv:2601.08850v1 Announce Type: new
Abstract: The question of whether AI systems have morally relevant interests — the ‘model welfare’ question — depends in part on how we evaluate AI testimony about inner states. This paper develops what I call the inconsistency critique: independent of whether skepticism about AI testimony is ultimately justified, our actual epistemic practices regarding such testimony exhibit internal inconsistencies that lack principled grounds. We functionally treat AI outputs as testimony across many domains — evaluating them for truth, challenging them, accepting corrections, citing them as sources — while categorically dismissing them in a specific domain, namely, claims about inner states. Drawing on Fricker’s distinction between treating a speaker as an ‘informant’ versus a ‘mere source,’ the framework of testimonial injustice, and Goldberg’s obligation-based account of what we owe speakers, I argue that this selective withdrawal of testimonial standing exhibits the epistemically problematic structure of prejudgment rather than principled caution. The inconsistency critique does not require taking a position on whether AI systems have morally relevant properties; rather, it is a contribution to what we may call ‘epistemological hygiene’ — examining the structure of our inquiry before evaluating its conclusions. Even if our practices happen to land on correct verdicts about AI moral status, they do so for reasons that cannot adapt to new evidence or changing circumstances.
