Don’t Forget Its Variance! The Minimum Path Variance Principle for Accurate and Stable Score-Based Models
arXiv:2602.00834v2 Announce Type: replace-cross
Abstract: Score-based methods are powerful across machine learning, but they face a paradox: they are theoretically path-independent, yet practically path-dependent.
We resolve this paradox by proving that practical training objectives differ from the ideal, ground-truth objective by a crucial but overlooked term: the path variance of the score function.
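Read schematically, and in notation introduced here rather than taken from the paper (with $s_\theta$ the learned score, $\gamma$ the interpolation path, and $x_t$ the interpolant at time $t$), the claim is a decomposition of the form:

```latex
% Schematic sketch of the claimed decomposition; illustrative notation only.
\[
\mathcal{L}_{\text{practical}}(\theta;\gamma)
  \;=\; \underbrace{\mathcal{L}_{\text{ideal}}(\theta)}_{\text{path-independent}}
  \;+\; \underbrace{\operatorname{Var}_{t\sim\gamma}\!\bigl[s_\theta(x_t,t)\bigr]}_{\text{path variance of the score}}
\]
```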
We propose the MinPV (**Min**imum **P**ath **V**ariance) Principle: choose the path that minimizes this variance term.
Our key contribution is a closed-form expression for this path variance, which makes its minimization tractable.
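The abstract does not state the closed form, but the quantity it targets can be illustrated with a Monte Carlo counterpart. In this sketch, `score_fn`, the linear interpolant, and `schedule` are all hypothetical stand-ins, not the paper's construction:

```python
import numpy as np

def empirical_path_variance(score_fn, x0, x1, schedule, n_steps=64):
    """Monte Carlo estimate of the score's variance along an interpolation
    path x_t = (1 - tau(t)) * x0 + tau(t) * x1.

    Illustrative stand-in for the paper's closed form: `score_fn(x, t)` and
    the linear interpolant are hypothetical, not taken from the paper.
    """
    ts = np.linspace(0.0, 1.0, n_steps)
    scores = []
    for t in ts:
        tau = schedule(t)                    # path position in [0, 1]
        x_t = (1.0 - tau) * x0 + tau * x1    # interpolant at time t
        scores.append(score_fn(x_t, t))
    scores = np.stack(scores)                # (n_steps, *x0.shape)
    return scores.var(axis=0).mean()         # variance over the path, averaged over dims
```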
By parameterizing the path with a flexible Kumaraswamy Mixture Model, our method learns data-adaptive, low-variance paths without heuristic, manual path selection.
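One plausible reading of this parameterization, assuming the mixture acts on a monotone time-reparameterization of the path (the Kumaraswamy CDF $F(t;a,b)=1-(1-t^a)^b$ is standard; the paper's exact construction may differ):

```python
import numpy as np

def kumaraswamy_mixture_schedule(t, a, b, logits):
    """Monotone schedule tau(t) on [0, 1] from a mixture of Kumaraswamy
    CDFs F(t; a, b) = 1 - (1 - t**a)**b.

    A hypothetical reading of the abstract's Kumaraswamy Mixture Model:
    `a`, `b` are positive shape parameters per component, `logits` the
    unnormalized mixture weights.
    """
    t = np.asarray(t, dtype=float)[..., None]  # broadcast over components
    w = np.exp(logits - np.max(logits))
    w = w / w.sum()                            # softmax mixture weights
    cdfs = 1.0 - (1.0 - t ** a) ** b           # one Kumaraswamy CDF per component
    return (w * cdfs).sum(axis=-1)             # convex combination is still a CDF

# Example: a two-component mixture evaluated on a coarse grid.
a, b, logits = np.array([0.5, 2.0]), np.array([2.0, 0.5]), np.zeros(2)
tau = kumaraswamy_mixture_schedule(np.linspace(0.0, 1.0, 5), a, b, logits)
```

Since each component CDF rises monotonically from 0 to 1, any convex combination does too, so the learned schedule remains a valid path reparameterization; fitting `a`, `b`, and `logits` by gradient descent on the closed-form variance would instantiate the MinPV principle.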
This principled optimization of the complete objective yields more accurate and stable estimators, establishing new state-of-the-art results on challenging benchmarks and providing a general framework for optimizing score-based interpolation.