Near-Optimal Regret for KL-Regularized Multi-Armed Bandits
arXiv:2603.02155v1 Announce Type: cross
Abstract: Recent studies have shown that reinforcement learning with KL-regularized objectives can enjoy faster convergence rates or logarithmic regret, in contrast to the classical $\sqrt{T}$-type regret of the unregularized setting. However, the statistical efficiency of online learning with KL-regularized objectives remains far from fully characterized, even when specialized to multi-armed bandits (MABs). We address this problem for MABs via a sharp analysis of KL-UCB using a novel peeling argument, which yields an $\tilde{O}(\eta K \log^2 T)$ upper bound: the first high-probability regret bound with linear dependence on $K$. Here, $T$ is the time horizon, $K$ is the number of arms, $\eta^{-1}$ is the regularization intensity, and $\tilde{O}$ hides all logarithmic factors except those involving $\log T$. The near-tightness of our analysis is certified by the first non-constant lower bound, $\Omega(\eta K \log T)$, which follows from subtle hard-instance constructions and a tailored decomposition of the Bayes prior. Moreover, in the low-regularization regime (i.e., large $\eta$), we show that the KL-regularized regret for MABs is $\eta$-independent and scales as $\tilde{\Theta}(\sqrt{KT})$. Overall, our results provide a thorough understanding of KL-regularized MABs across all regimes of $\eta$ and yield nearly optimal bounds in terms of $K$, $\eta$, and $T$.
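The abstract names KL-UCB and a KL-regularized objective but gives no algorithmic details, so the following is only a minimal illustrative Python sketch of how these two ingredients typically combine in this literature, not the paper's method: standard Bernoulli KL-UCB indices (bisection on the KL constraint) are fed into the Gibbs policy $\pi(a) \propto \pi_{\mathrm{ref}}(a)\exp(\eta\, r(a))$, which is the optimizer of the usual KL-regularized objective $\mathbb{E}_\pi[r] - \eta^{-1}\,\mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}})$. The objective form, the uniform reference policy `pi_ref`, and all toy parameters are assumptions for illustration.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_ucb_index(mean, count, t):
    """Classical KL-UCB index via bisection: the largest q >= mean
    with count * KL(mean, q) <= log(t)."""
    if count == 0:
        return 1.0  # unpulled arms get the maximal optimistic value
    budget = np.log(max(t, 2))
    lo, hi = mean, 1.0
    for _ in range(50):  # bisection to high precision
        mid = (lo + hi) / 2
        if count * bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def gibbs_policy(rewards, eta, pi_ref):
    """Optimal policy for the KL-regularized objective (assumed form):
    pi(a) proportional to pi_ref(a) * exp(eta * rewards[a])."""
    logits = eta * rewards + np.log(pi_ref)
    logits -= logits.max()  # numerical stability before exponentiating
    w = np.exp(logits)
    return w / w.sum()

# --- hypothetical toy run on K Bernoulli arms ---
rng = np.random.default_rng(0)
K, T, eta = 5, 10_000, 10.0
true_means = rng.uniform(0.2, 0.8, size=K)
pi_ref = np.full(K, 1.0 / K)  # uniform reference policy (assumption)
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    means = sums / np.maximum(counts, 1)
    ucbs = np.array([kl_ucb_index(means[a], counts[a], t) for a in range(K)])
    pi = gibbs_policy(ucbs, eta, pi_ref)  # optimism plugged into the Gibbs policy
    a = rng.choice(K, p=pi)
    r = float(rng.random() < true_means[a])  # Bernoulli reward
    counts[a] += 1
    sums[a] += r
```

Note the role of $\eta$ in this sketch: as $\eta \to \infty$ the Gibbs policy concentrates on the optimistic best arm and the behavior approaches unregularized KL-UCB, consistent with the abstract's $\eta$-independent $\tilde{\Theta}(\sqrt{KT})$ regime for large $\eta$.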