How Fair is Software Fairness Testing?
arXiv:2603.12511v1 Announce Type: new

Abstract: Software fairness testing is a central method for evaluating AI systems, yet the meaning of fairness is often treated as fixed and universally applicable. This vision paper positions fairness testing as culturally situated and examines the problem across three dimensions. First, fairness metrics encode particular cultural values while marginalizing others. Second, test datasets are predominantly designed from Western contexts, excluding knowledge systems grounded in oral traditions, Indigenous languages, and non-digital communities. Third, fairness […]