Transformers as Measure-Theoretic Associative Memory: A Statistical Perspective and Minimax Optimality

arXiv:2602.01863v1 Announce Type: new
Abstract: Transformers excel through content-addressable retrieval and the ability to exploit contexts of, in principle, unbounded length. We recast associative memory at the level of probability measures, treating a context as a distribution over tokens and viewing attention as an integral operator on measures. Concretely, for mixture contexts $\nu = I^{-1} \sum_{i=1}^{I} \mu^{(i)}$ and a query $x_{\mathrm{q}}$ associated with a component index $i^*$, the task decomposes into (i) recall of the relevant component $\mu^{(i^*)}$ and (ii) prediction from $(\mu^{(i^*)}, x_{\mathrm{q}})$. We study learned softmax attention (not a frozen kernel) trained by empirical risk minimization and show that a shallow measure-theoretic Transformer composed with an MLP learns the recall-and-predict map under a spectral assumption on the input densities. We further establish a matching minimax lower bound with the same rate exponent (up to multiplicative constants), proving sharpness of the convergence order. The framework offers a principled recipe for designing and analyzing Transformers that recall from arbitrarily long, distributional contexts with provable generalization guarantees.
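To make the recall-and-predict decomposition concrete, here is a minimal numerical sketch: it builds a mixture context from Gaussian token components, applies plain dot-product softmax attention as a weighted average over the empirical context measure, and checks which component the attention output recalls. The dimensions, Gaussian components, temperature, and nearest-mean readout are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of the recall-and-predict view of attention over a mixture context.
# Assumptions (not from the paper): Gaussian components, dot-product softmax
# attention with temperature sqrt(d), and a nearest-mean readout in place of an MLP.
import numpy as np

rng = np.random.default_rng(0)

d, I, n_per = 8, 4, 64                      # token dim, components, tokens per component
means = 3.0 * rng.normal(size=(I, d))       # component means mu^(i)

# Mixture context nu: empirical measure over tokens drawn from all I components.
tokens = np.concatenate(
    [means[i] + rng.normal(size=(n_per, d)) for i in range(I)], axis=0
)                                           # shape (I * n_per, d)

i_star = 2
x_q = means[i_star] + rng.normal(size=d)    # query tied to component i*

def softmax_attention(query, keys, values, tau):
    """Attention as an integral operator on the empirical context measure:
    a softmax-weighted average of values under the context distribution."""
    scores = keys @ query / tau
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ values, w

# (i) Recall: the softmax weights should concentrate on tokens from component i*.
recalled, weights = softmax_attention(x_q, tokens, tokens, tau=np.sqrt(d))
mass_per_component = weights.reshape(I, n_per).sum(axis=1)
print("attention mass per component:", np.round(mass_per_component, 3))

# (ii) Predict: a simple readout of (recalled summary, query); here we only
# check that the recalled vector lies closest to the relevant mean mu^(i*).
dists = np.linalg.norm(means - recalled, axis=1)
print("recalled component:", int(dists.argmin()), " true i*:", i_star)
```

In the paper's setting the second stage is a learned MLP trained by empirical risk minimization rather than this nearest-mean check; the sketch only illustrates how a single softmax-attention layer can act as a content-addressable retrieval step on a distributional context.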
