Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
arXiv:2507.05306v3 Announce Type: replace
Abstract: We consider the multinomial logistic bandit problem in which a learner interacts with an environment by selecting actions to maximize expected rewards based on probabilistic feedback from multiple possible outcomes. In the binary setting, recent work has focused on understanding the impact of the non-linearity of the logistic model (Faury et al., 2020; Abeille et al., 2021). They introduced a problem-dependent constant $\kappa_* \geq 1$ that may be exponentially large in some problem parameters and which is captured by the derivative of the sigmoid function. It encapsulates the non-linearity and improves existing regret guarantees over $T$ rounds from $\smash{O(d\sqrt{T})}$ to $\smash{O(d\sqrt{T/\kappa_*})}$, where $d$ is the dimension of the parameter space. We extend their analysis to the multinomial logistic bandit framework with a finite action space, making it suitable for complex applications with more than two choices, such as reinforcement learning or recommender systems. To achieve this, we extend the definition of $\kappa_*$ to the multinomial setting and propose an efficient algorithm that leverages the problem's non-linearity. Our method yields a problem-dependent regret bound of order $\smash{\widetilde{\mathcal{O}}(Rd\sqrt{KT/\kappa_*})}$, where $R$ denotes the norm of the vector of rewards and $K$ is the number of outcomes. This improves upon the best existing guarantees of order $\smash{\widetilde{\mathcal{O}}(RdK\sqrt{T})}$. Moreover, we provide a matching $\smash{\Omega(dR\sqrt{KT/\kappa_*})}$ lower bound, showing that our algorithm is minimax-optimal and that our definition of $\kappa_*$ is optimal.
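To illustrate how the derivative of the sigmoid gives rise to such a constant, the following sketch recalls the binary-case form used in the prior work cited above (Faury et al., 2020); the precise multinomial definition of $\kappa_*$ is the paper's own extension and is not reproduced here.

```latex
% Binary logistic bandit sketch (assumption: follows the worst-case
% constant of Faury et al., 2020; \kappa_* is a problem-dependent
% refinement of this quantity).
%
% Sigmoid link function and its derivative:
\mu(z) = \frac{1}{1 + e^{-z}},
\qquad
\dot{\mu}(z) = \mu(z)\bigl(1 - \mu(z)\bigr) \in (0, \tfrac{1}{4}].
%
% Non-linearity constant over actions a \in \mathcal{A} and admissible
% parameters \theta \in \Theta:
\kappa = \sup_{a \in \mathcal{A},\; \theta \in \Theta}
         \frac{1}{\dot{\mu}(a^{\top}\theta)} \;\geq\; 1.
```

Since $\dot{\mu}(z)$ decays exponentially as $|z|$ grows, this constant can be exponentially large in the norm of the parameter, which is why placing $\kappa_*$ in the denominator of the regret bound is a meaningful improvement.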