The MAPS Algorithm: Fast model-agnostic and distribution-free prediction intervals for supervised learning

arXiv:2410.08958v2 Announce Type: replace
Abstract: A fundamental problem in modern supervised learning is computing reliable conditional prediction intervals in high-dimensional settings: existing methods often rely on restrictive modelling assumptions, do not scale as predictor dimension increases, or only guarantee marginal (population-level) rather than conditional (individual-level) coverage. We introduce the $\textit{lifted predictive model}$ (LPM), a new conditional representation, and propose the MAPS (Model-Agnostic Prediction Sets) algorithm, which produces distribution-free conditional prediction intervals and adapts to any trained predictive model. Our procedure is bootstrap-based, scales to high-dimensional inputs, and accounts for heteroscedastic errors. We establish the theoretical properties of the LPM, connect prediction accuracy to interval length, and provide sufficient conditions for asymptotic conditional coverage. We evaluate the finite-sample performance of MAPS in a simulation study, and apply our method to simulation-based inference and image classification. In the former, MAPS provides the first approach for debiasing neural Bayes estimators and constructing valid confidence intervals for model parameters given the estimators, at any desired level. In the latter, it provides the first approach that accounts for uncertainty in model calibration and label prediction.
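To give a flavour of the model-agnostic, bootstrap-based setting the abstract describes, here is a minimal sketch of a generic bootstrap-residual prediction interval wrapped around an arbitrary trained predictor. This is **not** the MAPS algorithm (whose LPM construction and conditional-coverage guarantees are in the paper); it is a simpler baseline with only marginal coverage, and all names (`bootstrap_prediction_interval`, `model_predict`) are illustrative assumptions.

```python
import numpy as np

def bootstrap_prediction_interval(model_predict, X_train, y_train, x_new,
                                  alpha=0.1, n_boot=500, seed=None):
    """Generic bootstrap-residual interval for any trained model.

    NOTE: illustrative baseline only, not the paper's MAPS procedure.
    It resamples in-sample residuals, so it targets marginal coverage
    and ignores heteroscedasticity, which MAPS is designed to handle.
    """
    rng = np.random.default_rng(seed)
    # In-sample residuals from the (already trained) model.
    residuals = y_train - model_predict(X_train)
    # Point prediction at the new input (x_new: one row of features).
    point = model_predict(x_new)[0]
    # Resample residuals to approximate the predictive error distribution.
    draws = rng.choice(residuals, size=n_boot, replace=True)
    lo, hi = np.quantile(point + draws, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For example, wrapping a fitted regression function `predict` and calling `bootstrap_prediction_interval(predict, X, y, x_new, alpha=0.1)` returns a 90% interval for the response at `x_new`, regardless of how the predictor was trained.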
