Eventually LIL Regret: Almost Sure $\ln \ln T$ Regret for a sub-Gaussian Mixture on Unbounded Data

arXiv:2512.12325v2 Announce Type: replace-cross
Abstract: We prove that a classic sub-Gaussian mixture proposed by Robbins in a stochastic setting actually satisfies a path-wise (deterministic) regret bound. For every path in a natural “Ville event” $E_\alpha$, the regret up to time $T$ is bounded by $\ln^2(1/\alpha)/V_T + \ln(1/\alpha) + \ln \ln V_T$ up to universal constants, where $V_T$ is a nonnegative, nondecreasing, cumulative variance process. (The bound reduces to $\ln(1/\alpha) + \ln \ln V_T$ if $V_T \geq \ln(1/\alpha)$.) If the data were stochastic, one can show that $E_\alpha$ has probability at least $1-\alpha$ under a wide class of distributions (e.g., sub-Gaussian, symmetric, or variance-bounded). In fact, we show that on the probability-one Ville event $E_0$, the regret of every path is eventually bounded by $\ln \ln V_T$ (up to constants). We explain how this work helps bridge the world of adversarial online learning (which usually deals with regret bounds for bounded data) with game-theoretic statistics (which can handle unbounded data, albeit using stochastic assumptions). In short, conditional regret bounds serve as a bridge between stochastic and adversarial betting.
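To make the abstract's objects concrete, here is a minimal numerical sketch (not from the paper) of the mixture-regret mechanism: the cumulative sum $S_t$, the variance process $V_t$, the mixture log-wealth, the Ville event that the wealth never exceeds $1/\alpha$, and the regret against the best fixed bet in hindsight. It uses a Gaussian mixing prior as a stand-in for Robbins' mixture, which only attains roughly $\tfrac12 \ln V_T$ regret; the paper's point is that Robbins' mixture sharpens this to $\ln \ln V_T$. The simulated data and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.05
T = 10_000
x = rng.normal(size=T)                   # 1-sub-Gaussian increments (stochastic check)

S = np.cumsum(x)                         # cumulative sum S_t
V = np.arange(1, T + 1, dtype=float)     # cumulative variance process V_t (here V_t = t)

# Gaussian-mixture log-wealth: log of the integral of exp(l*S_t - l^2*V_t/2)
# against a N(0,1) prior over l, which has the closed form
#   S_t^2 / (2*(1 + V_t)) - 0.5*log(1 + V_t)
log_wealth = S**2 / (2 * (1 + V)) - 0.5 * np.log1p(V)

# Ville event E_alpha: the mixture wealth never exceeds 1/alpha along the path.
on_ville_event = bool(np.all(log_wealth < np.log(1 / alpha)))

# Regret vs. the best fixed bet in hindsight: sup_l (l*S_t - l^2*V_t/2) = S_t^2/(2*V_t).
best_fixed = S**2 / (2 * V)
regret = best_fixed - log_wealth

print(f"on Ville event E_alpha: {on_ville_event}")
print(f"final regret: {regret[-1]:.3f}  vs  0.5*ln(1+V_T) = {0.5 * np.log1p(V[-1]):.3f}")
```

Under this Gaussian prior the regret works out to exactly $S_T^2 / (2 V_T (1+V_T)) + \tfrac12 \ln(1+V_T)$, so the $\tfrac12 \ln V_T$ growth is visible directly; replacing the prior with Robbins' heavier-tailed mixture is what yields the $\ln \ln V_T$ rate studied in the paper.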
