Standard Transformers Achieve the Minimax Rate in Nonparametric Regression with $C^{s,\lambda}$ Targets
arXiv:2602.20555v1 Announce Type: new
Abstract: The tremendous success of Transformer models in fields such as large language models and computer vision necessitates a rigorous theoretical investigation. To the best of our knowledge, this paper is the first work to prove that standard Transformers can approximate Hölder functions in $C^{s,\lambda}\left([0,1]^{d\times n}\right)$ $(s\in\mathbb{N}_{\geq 0},\ 0<\lambda\leq 1)$ under the $L^t$ distance ($t \in [1, \infty]$) with arbitrary precision. Building upon this approximation result, we demonstrate that standard Transformers achieve the minimax optimal rate in nonparametric regression with Hölder target functions. Moreover, by introducing two metrics, the size tuple and the dimension vector, we provide a fine-grained characterization of Transformer structures, which facilitates future research on the generalization and optimization errors of Transformers with different architectures. As intermediate results, we also derive upper bounds on the Lipschitz constant of standard Transformers and on their memorization capacity, which may be of independent interest. These findings provide theoretical justification for the powerful capabilities of Transformer models.
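For context, the abstract does not state the rate itself; the classical benchmark it refers to is the minimax rate for Hölder-smooth regression from nonparametric statistics (Stone, 1982). Writing $\beta = s + \lambda$ for the smoothness and $D$ for the input dimension (here presumably $D = dn$ for inputs in $[0,1]^{d\times n}$), the standard statement is

$$ \inf_{\hat{f}} \; \sup_{f \in C^{s,\lambda}\left([0,1]^{D}\right)} \mathbb{E}\,\bigl\| \hat{f} - f \bigr\|_{L^2}^2 \;\asymp\; N^{-\frac{2\beta}{2\beta + D}}, $$

where $N$ is the sample size and the infimum runs over all estimators. The paper's claim, as summarized above, is that estimators built from standard Transformers attain this rate (possibly up to logarithmic factors); the precise statement and constants are in the paper itself.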