Bandit Allocational Instability

arXiv:2602.07472v1 Announce Type: cross
Abstract: When multi-armed bandit (MAB) algorithms allocate pulls among competing arms, the resulting allocation can exhibit huge variation. This is particularly harmful in modern applications such as learning-enhanced platform operations and post-bandit statistical inference. Thus motivated, we introduce a new performance metric of MAB algorithms termed allocation variability, which is the largest (over arms) standard deviation of an arm's number of pulls. We establish a fundamental trade-off between allocation variability and regret, the canonical performance metric of reward maximization. In particular, for any algorithm, the worst-case regret $R_T$ and worst-case allocation variability $S_T$ must satisfy $R_T \cdot S_T = \Omega(T^{3/2})$ as $T \rightarrow \infty$, as long as $R_T = o(T)$. This indicates that any minimax regret-optimal algorithm must incur worst-case allocation variability $\Theta(T)$, the largest possible scale; while any algorithm with sublinear worst-case regret must necessarily incur $S_T = \omega(\sqrt{T})$. We further show that this lower bound is essentially tight, and that any point on the Pareto frontier $R_T \cdot S_T = \tilde{\Theta}(T^{3/2})$ can be achieved by a simple tunable algorithm UCB-f, a generalization of the classic UCB1. Finally, we discuss implications for platform operations and for statistical inference, when bandit algorithms are used. As a byproduct of our result, we resolve an open question of Praharaj and Khamaru (2025).
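The abstract does not spell out the form of UCB-f, but a natural reading is a UCB1-style index whose exploration bonus is rescaled by a tunable function f, trading regret against how evenly pulls are spread. The sketch below is an illustrative assumption along those lines, not the paper's algorithm: the function name `ucb_f`, the bonus form `sqrt(f(t) * log(t) / pulls)`, and the Bernoulli reward model are all placeholders for the idea.

```python
import numpy as np

def ucb_f(means, horizon, f=lambda t: 2.0, seed=0):
    """UCB1-style policy with a tunable exploration scale f(t).

    With f constant this behaves like ordinary UCB1; a growing f
    spreads pulls more evenly across arms (lower allocation
    variability) at the cost of higher regret. Illustrative sketch
    only; the exact UCB-f in the paper may differ.
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    pulls = np.zeros(k, dtype=int)
    reward_sums = np.zeros(k)

    # Pull each arm once to initialize the empirical means.
    for arm in range(k):
        reward_sums[arm] += rng.binomial(1, means[arm])
        pulls[arm] += 1

    for t in range(k, horizon):
        est = reward_sums / pulls
        bonus = np.sqrt(f(t + 1) * np.log(t + 1) / pulls)
        arm = int(np.argmax(est + bonus))
        reward_sums[arm] += rng.binomial(1, means[arm])
        pulls[arm] += 1

    # Expected regret against always playing the best arm.
    regret = horizon * max(means) - float(np.dot(pulls, means))
    return pulls, regret

if __name__ == "__main__":
    # Compare a UCB1-like schedule with a more exploratory one.
    for name, f in [("UCB1-like", lambda t: 2.0),
                    ("more exploratory", lambda t: 0.05 * np.sqrt(t))]:
        pulls, regret = ucb_f([0.5, 0.45], horizon=10_000, f=f)
        print(f"{name}: pulls={pulls}, expected regret={regret:.1f}")
```

Running the two schedules side by side makes the trade-off concrete: the more exploratory schedule keeps the pull counts closer together (lower allocation variability) while accumulating noticeably more regret, consistent with the $R_T \cdot S_T = \tilde{\Theta}(T^{3/2})$ frontier described above.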
