Sample Complexity of Average-Reward Q-Learning: From Single-agent to Federated Reinforcement Learning

arXiv:2601.13642v1 Announce Type: new
Abstract: Average-reward reinforcement learning offers a principled framework for long-term decision-making by maximizing the mean reward per time step. Although Q-learning is a widely used model-free algorithm with established sample complexity in discounted and finite-horizon Markov decision processes (MDPs), its theoretical guarantees for average-reward settings remain limited. This work studies a simple but effective Q-learning algorithm for average-reward MDPs with finite state and action spaces under the weakly communicating assumption, covering both single-agent and federated scenarios. For the single-agent case, we show that Q-learning with carefully chosen parameters achieves sample complexity $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{\varepsilon^3}\right)$, where $\|h^{\star}\|_{\mathsf{sp}}$ is the span norm of the bias function, improving previous results by at least a factor of $\frac{\|h^{\star}\|_{\mathsf{sp}}^2}{\varepsilon^2}$. In the federated setting with $M$ agents, we prove that collaboration reduces the per-agent sample complexity to $\widetilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\|h^{\star}\|_{\mathsf{sp}}^3}{M\varepsilon^3}\right)$, with only $\widetilde{O}\left(\frac{\|h^{\star}\|_{\mathsf{sp}}}{\varepsilon}\right)$ communication rounds required. These results establish the first federated Q-learning algorithm for average-reward MDPs, with provable efficiency in both sample and communication complexity.
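
To fix ideas, a minimal sketch of tabular average-reward Q-learning is given below. The abstract does not specify the paper's exact update rule or exploration scheme, so this sketch uses the standard relative (RVI-style) variant, in which a fixed reference entry $Q(s_{\mathrm{ref}}, a_{\mathrm{ref}})$ is subtracted in the temporal-difference target so that the iterates track the (shifted) bias function instead of growing linearly; the `env` interface, the epsilon-greedy behavior policy, and the constant step size are illustrative assumptions, not the paper's parameter choices.

```python
import numpy as np

def avg_reward_q_learning(env, num_states, num_actions, num_steps,
                          step_size=0.1, ref_state=0, ref_action=0, seed=0):
    """Sketch of relative (RVI-style) Q-learning for an average-reward MDP.

    Assumed interface: env.reset() -> state, env.step(a) -> (next_state, reward).
    The update is Q(s,a) += eta * (r + max_a' Q(s',a') - Q(s_ref,a_ref) - Q(s,a)),
    where Q(s_ref, a_ref) serves as a running estimate of the optimal gain.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))
    s = env.reset()
    for _ in range(num_steps):
        # epsilon-greedy exploration (an assumption; not specified in the abstract)
        if rng.random() < 0.1:
            a = int(rng.integers(num_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r = env.step(a)
        # relative Q-learning update: subtracting the reference entry keeps the
        # iterates bounded and centers them around the bias function
        td_target = r + Q[s_next].max() - Q[ref_state, ref_action]
        Q[s, a] += step_size * (td_target - Q[s, a])
        s = s_next
    gain_estimate = Q[ref_state, ref_action]  # estimate of the optimal average reward
    return Q, gain_estimate
```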

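For the federated setting, the abstract states only that $M$ agents collaborate using $\widetilde{O}\left(\frac{\|h^{\star}\|_{\mathsf{sp}}}{\varepsilon}\right)$ communication rounds; it does not describe the aggregation protocol. The sketch below illustrates one natural intermittent-communication scheme, where each agent runs local relative Q-learning updates from the current server iterate and the server averages the local Q-tables once per round. The plain averaging rule, the number of local steps, and the exploration scheme are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def federated_avg_reward_q_learning(envs, num_states, num_actions,
                                    num_rounds, local_steps,
                                    step_size=0.05, ref_state=0, ref_action=0, seed=0):
    """Sketch of federated relative Q-learning with periodic averaging.

    Each of the M = len(envs) agents performs `local_steps` local updates per
    communication round, after which the server averages the local Q-tables.
    This aggregation rule is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    Q_global = np.zeros((num_states, num_actions))
    states = [env.reset() for env in envs]
    for _ in range(num_rounds):
        local_Qs = []
        for m, env in enumerate(envs):
            Q = Q_global.copy()  # each agent starts the round from the server iterate
            s = states[m]
            for _ in range(local_steps):
                if rng.random() < 0.1:
                    a = int(rng.integers(num_actions))
                else:
                    a = int(np.argmax(Q[s]))
                s_next, r = env.step(a)
                td_target = r + Q[s_next].max() - Q[ref_state, ref_action]
                Q[s, a] += step_size * (td_target - Q[s, a])
                s = s_next
            states[m] = s
            local_Qs.append(Q)
        Q_global = np.mean(local_Qs, axis=0)  # server aggregates local Q-tables
    return Q_global
```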