Dimension-Independent Convergence of Underdamped Langevin Monte Carlo in KL Divergence

arXiv:2603.02429v1 Announce Type: cross
Abstract: Underdamped Langevin dynamics (ULD) is a widely used sampler for Gibbs distributions $\pi \propto e^{-V}$, and is often empirically effective in high dimensions. However, existing non-asymptotic convergence guarantees for discretized ULD typically scale polynomially with the ambient dimension $d$, leading to vacuous bounds when $d$ is large. The main known dimension-free result concerns the randomized midpoint discretization in Wasserstein-2 distance (Liu et al., 2023), while dimension-independent guarantees for ULD discretizations in KL divergence have remained open. We close this gap by proving the first dimension-free KL divergence bounds for discretized ULD. Our analysis refines the KL local error framework (Altschuler et al., 2025) to a dimension-free setting and yields bounds that depend on $\mathrm{tr}(\mathbf{H})$, where $\mathbf{H}$ upper bounds the Hessian of $V$, rather than on $d$. As a consequence, we obtain improved iteration complexity for underdamped Langevin Monte Carlo relative to overdamped Langevin methods in regimes where $\mathrm{tr}(\mathbf{H}) \ll d$.
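
For readers unfamiliar with the sampler itself, here is a minimal Python sketch of underdamped Langevin Monte Carlo using a plain Euler-Maruyama discretization. This is for illustration only: the function name `uld_step` and the friction and step-size values are assumptions, and this is not the specific discretization (nor the randomized midpoint scheme) analyzed in the paper.

```python
import numpy as np

def uld_step(x, v, grad_V, gamma, h, rng):
    """One Euler-Maruyama step of underdamped Langevin dynamics:
        dx_t = v_t dt,
        dv_t = -(gamma * v_t + grad V(x_t)) dt + sqrt(2 * gamma) dB_t.
    Generic first-order discretization, shown only to illustrate ULD;
    not the scheme whose KL convergence is analyzed in the paper.
    """
    noise = rng.standard_normal(x.shape)
    x_new = x + h * v
    v_new = v - h * (gamma * v + grad_V(x)) + np.sqrt(2.0 * gamma * h) * noise
    return x_new, v_new

# Toy usage: sample from a standard Gaussian, V(x) = ||x||^2 / 2,
# so grad V(x) = x and the stationary x-marginal is N(0, I).
rng = np.random.default_rng(0)
d = 1000                 # ambient dimension (assumed value)
x = np.zeros(d)
v = np.zeros(d)
gamma, h = 2.0, 0.05     # friction and step size (assumed values)
for _ in range(2000):
    x, v = uld_step(x, v, lambda y: y, gamma, h, rng)
print(x.mean(), x.var())  # should be near 0 and 1, up to O(h) discretization bias
```

The auxiliary velocity variable $v$ is what distinguishes underdamped from overdamped Langevin dynamics; it is this second-order structure that the dimension-free analysis above exploits.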
