Mean Testing under Truncation beyond Gaussian
arXiv:2605.01335v1 Announce Type: new
Abstract: We characterize the fundamental limits of high-dimensional mean testing under arbitrary truncation, where samples are drawn from the conditional distribution $P(\cdot \mid S)$ for an unknown truncation set $S$ that may hide up to an $\varepsilon$-fraction of the probability mass. For distributions with $p$-th directional moments of magnitude at most $\nu_{P,p}$, truncation induces a bias of order $O(\nu_{P,p}\varepsilon^{1-1/p})$. This bias creates a sharp information-theoretic detectability floor: when the signal $\alpha$ falls below this threshold, the null and alternative hypotheses are indistinguishable even with infinite data. Above this floor, we prove that a simple second-order test achieves near-optimal sample complexity $n = O\!\left(\frac{\|\Sigma_P\|}{(\alpha-4\nu_{P,p}\varepsilon^{1-1/p})^2}\sqrt{d}\right)$. We further identify a structural escape from this finite-moment bias barrier: under a directional median regularity assumption, the truncation bias improves to linear order $O(\varepsilon)$. This reveals an intermediate regime in which estimation requires $\Theta(d)$ samples for uniform recovery, while testing recovers the classical $\Theta(\sqrt{d})$ rate once the truncation bias is eliminated. Together, our results provide a unified framework for mean testing under truncation, connecting the finite-moment, sub-Gaussian, and median-regular structural regimes.
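The detectability floor and sample-complexity bound stated in the abstract can be illustrated numerically. The sketch below is not the paper's algorithm; it simply evaluates the stated expressions, the bias floor $O(\nu_{P,p}\varepsilon^{1-1/p})$ and the bound $n = O(\|\Sigma_P\| \sqrt{d} / (\alpha - 4\nu_{P,p}\varepsilon^{1-1/p})^2)$, for hypothetical parameter values (all constants and inputs chosen here are illustrative assumptions, not values from the paper).

```python
import math


def truncation_bias_floor(nu_p: float, eps: float, p: float) -> float:
    """Order of the truncation-induced bias: nu_{P,p} * eps^(1 - 1/p)."""
    return nu_p * eps ** (1.0 - 1.0 / p)


def sample_complexity(alpha: float, nu_p: float, eps: float, p: float,
                      sigma_op: float, d: int):
    """Evaluate the abstract's bound n ~ ||Sigma_P|| * sqrt(d) / gap^2,
    where gap = alpha - 4 * nu_{P,p} * eps^(1 - 1/p).

    Returns None when the signal sits at or below the detectability
    floor, where no sample size suffices.
    """
    gap = alpha - 4.0 * truncation_bias_floor(nu_p, eps, p)
    if gap <= 0:
        return None  # below the information-theoretic floor
    return sigma_op * math.sqrt(d) / gap ** 2


# Hypothetical parameters: p = 4 moments, 1% truncated mass, d = 100.
n = sample_complexity(alpha=0.5, nu_p=1.0, eps=0.01, p=4, sigma_op=1.0, d=100)
below = sample_complexity(alpha=0.05, nu_p=1.0, eps=0.01, p=4,
                          sigma_op=1.0, d=100)  # signal under the floor
```

Note how the $\varepsilon^{1-1/p}$ exponent makes the floor shrink as $p$ grows: with heavier tails (small $p$) even a small truncated fraction moves the mean substantially, while in the sub-Gaussian limit the bias approaches linear order in $\varepsilon$.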