“What don’t you understand?” Language games and black box algorithms
arXiv:2603.25900v1 Announce Type: new
Abstract: The aim of this article is to examine the problem of “black box” algorithms, an issue central to the nascent field of Explainable Artificial Intelligence (XAI). While it is relatively easy to understand something that someone has explained to us, matters become more complicated when no one can fully grasp the issue. Our purpose, however, is to highlight: (1) that we should speak of interpretability rather than explainability when we seek to understand models, mainly because we never have complete and unambiguous access to the relevant information; (2) that machines face the problem of the inscrutability of reference, in the same way that the linguist imagined by Willard Van Orman Quine cannot precisely determine what the term “gavagai” refers to in a situation of radical translation; (3) that there is no rule governing the application of language apart from “language games”, as Ludwig Wittgenstein’s philosophy of language teaches us. The hope of achieving complete explainability and transparency of algorithms is undoubtedly in vain: we can only rely on partial and broad interpretations that will never fully explain the underlying rules.