All ERMs Can Fail in Stochastic Convex Optimization: Lower Bounds in Linear Dimension
arXiv:2602.08350v1 Announce Type: cross
Abstract: We study the sample complexity of the best-case Empirical Risk Minimizer (ERM) in the setting of stochastic convex optimization. We show that there exists an instance in which the sample size is linear in the dimension, learning is possible, yet the Empirical Risk Minimizer is likely to be unique and to overfit. This resolves an open question posed by Feldman. We also extend this result to approximate ERMs.
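For intuition only, a minimal numerical sketch of the quantities the abstract refers to: an ERM over a convex domain computed from $m$ samples, and its generalization gap (population risk minus empirical risk). The unit-ball domain, the linear toy loss, the Gaussian data, and all names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy stochastic convex optimization instance (an assumption for illustration):
# convex domain = unit Euclidean ball in R^d, convex loss f(w; z) = <w, z>,
# data z drawn i.i.d. from a Gaussian with mean mu.
rng = np.random.default_rng(0)
d, m = 50, 50                          # dimension and sample size (m linear in d)
mu = np.ones(d) / np.sqrt(d)           # population mean of the data

def loss(w, z):
    return w @ z                       # linear, hence convex, loss

samples = rng.normal(loc=mu, scale=1.0, size=(m, d))
z_bar = samples.mean(axis=0)           # empirical risk of w equals <w, z_bar>

# For this toy loss the exact ERM over the unit ball is -z_bar / ||z_bar||.
w_erm = -z_bar / np.linalg.norm(z_bar)

emp_risk = loss(w_erm, z_bar)          # empirical risk of the ERM
pop_risk = loss(w_erm, mu)             # population risk of the ERM
print(f"generalization gap of the ERM: {pop_risk - emp_risk:.3f}")
```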
Building on our construction, we also show that (constrained) Gradient Descent can overfit when the horizon and learning rate grow with the sample size. Specifically, we provide a novel generalization lower bound of $\Omega\left(\sqrt{\eta T/m^{1.5}}\right)$ for Gradient Descent, where $\eta$ is the learning rate, $T$ is the horizon, and $m$ is the sample size. This exponentially narrows the gap between the best known upper bound of $O(\eta T/m)$ and existing lower bounds from previous constructions.
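The Gradient Descent result concerns constrained (projected) GD run on the empirical risk for $T$ steps with learning rate $\eta$ over $m$ samples. Below is a minimal sketch of that procedure, included only to make the roles of $\eta$, $T$, and $m$ concrete; the domain, loss, data distribution, and the choice to output the average iterate are placeholder assumptions, not the paper's construction or output rule.

```python
import numpy as np

def projected_gd_on_empirical_risk(samples, grad_loss, eta, T, project, d):
    """Constrained (projected) GD: T steps of step size eta on the empirical
    risk built from the given samples; returns the average iterate."""
    w = np.zeros(d)
    iterates = []
    for _ in range(T):
        g = np.mean([grad_loss(w, z) for z in samples], axis=0)  # empirical gradient
        w = project(w - eta * g)                                 # gradient step + projection
        iterates.append(w)
    return np.mean(iterates, axis=0)

# Placeholder instance: linear loss over the unit ball (assumption for illustration).
rng = np.random.default_rng(1)
d, m, eta, T = 50, 50, 0.1, 200
mu = np.ones(d) / np.sqrt(d)
samples = rng.normal(loc=mu, scale=1.0, size=(m, d))

grad_loss = lambda w, z: z                            # gradient of f(w; z) = <w, z>
project = lambda w: w / max(1.0, np.linalg.norm(w))   # projection onto the unit ball

w_gd = projected_gd_on_empirical_risk(samples, grad_loss, eta, T, project, d)
gap = (w_gd @ mu) - (w_gd @ samples.mean(axis=0))     # population minus empirical risk
print(f"generalization gap of the GD output: {gap:.3f}")
```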