Optimistic Training and Convergence of Q-Learning — Extended Version
arXiv:2602.06146v1 Announce Type: cross
Abstract: Recent work shows that Q-learning with linear function approximation is stable, in the sense of bounded parameter estimates, under the $(\varepsilon,\kappa)$-tamed Gibbs policy; $\kappa$ is the inverse temperature, and $\varepsilon>0$ is introduced for additional exploration. Under these assumptions it also follows that there is a solution to the projected Bellman equation (PBE). Left open are uniqueness of the solution and criteria for convergence outside of the standard tabular or linear MDP settings.
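For orientation, one standard formulation of the PBE in this setting is the following (a generic sketch; the paper's precise definitions, in particular the taming of the Gibbs policy, may differ in details). With basis $\psi$ and $Q^\theta(x,a)=\theta^\top\psi(x,a)$, a parameter $\theta^*$ solves the PBE if
\[
\mathrm{E}\Big[\psi(X_n,A_n)\Big(r(X_n,A_n) + \gamma \max_{a'} Q^{\theta^*}(X_{n+1},a') - Q^{\theta^*}(X_n,A_n)\Big)\Big] = 0,
\]
where the expectation is taken in steady state under the policy used to generate the training data. Stability and convergence of Q-learning hinge on the existence, and ideally uniqueness, of such a root.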
The present work extends these results to other variants of Q-learning and clarifies prior work: a one-dimensional example shows that, under an oblivious training policy, the PBE may have no solution or multiple solutions, and in either case the algorithm is not stable under oblivious training.
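To make the distinction concrete, here is a minimal sketch in Python of the recursion at issue for a generic finite MDP; the names (psi, gibbs_policy, q_learning_step) and the softmax-with-uniform-mixing form of the exploration policy are illustrative assumptions, not the paper's definitions. Oblivious training draws actions from a fixed distribution independent of the current estimate $\theta$, while adaptive training draws them from a policy that depends on $\theta$:

import numpy as np

def gibbs_policy(theta, psi, x, actions, kappa=1.0, eps=0.1):
    # Softmax ("Gibbs") distribution over the current Q-estimates, mixed
    # with uniform exploration; a stand-in for an (eps, kappa)-style policy.
    q = np.array([theta @ psi(x, a) for a in actions])
    p = np.exp(kappa * (q - q.max()))
    p /= p.sum()
    return (1.0 - eps) * p + eps / len(actions)

def q_learning_step(theta, psi, x, a, r, x_next, actions, gamma, alpha):
    # One step of Q-learning with linear function approximation.
    q_next = max(theta @ psi(x_next, b) for b in actions)
    td_error = r + gamma * q_next - theta @ psi(x, a)
    return theta + alpha * td_error * psi(x, a)

# Oblivious training: a is sampled from a fixed distribution over actions.
# Adaptive training: a is sampled from gibbs_policy(theta, psi, x, actions),
# so the behavior policy tracks the parameter estimates.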
The main contribution is to show that far more structure is required for convergence. An example is presented for which the basis is ideal, in the sense that the true Q-function lies in the span of the basis. Nevertheless, there are two solutions to the PBE under the greedy policy, and hence also for the $(\varepsilon,\kappa)$-tamed Gibbs policy for all sufficiently small $\varepsilon>0$ and $\kappa\ge 1$.
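A generic way to exhibit multiple PBE solutions numerically (not taken from the paper; the model passed in is a placeholder): on any region of parameter space where the greedy policy equals a fixed deterministic policy, the PBE is linear in $\theta$, so every solution can be found by enumerating deterministic policies, solving the associated linear system, and keeping the self-consistent roots:

import itertools
import numpy as np

def pbe_solutions(P, R, Psi, mu, gamma):
    # P: (S, A, S) transition kernel, R: (S, A) rewards,
    # Psi: (S, A, d) features, mu: (S, A) training state-action weights.
    S, A, d = Psi.shape
    Phi = Psi.reshape(S * A, d)
    D = np.diag(mu.reshape(S * A))
    b = Phi.T @ D @ R.reshape(S * A)
    sols = []
    for pi in itertools.product(range(A), repeat=S):
        # Expected next-step features when the next action is chosen by pi.
        next_phi = Psi[np.arange(S), list(pi), :]          # (S, d)
        PPhi = np.einsum('xas,sd->xad', P, next_phi).reshape(S * A, d)
        Amat = Phi.T @ D @ (Phi - gamma * PPhi)
        try:
            theta = np.linalg.solve(Amat, b)
        except np.linalg.LinAlgError:
            continue
        # Keep theta only if pi is actually greedy with respect to Q^theta.
        Q = (Phi @ theta).reshape(S, A)
        if all(np.argmax(Q[x]) == pi[x] for x in range(S)):
            sols.append(theta)
    return sols

Two or more distinct vectors returned by such a routine would correspond to the kind of multiplicity described above.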