Hidden Minima in Two-Layer ReLU Networks

arXiv:2312.16819v4 Announce Type: replace-cross
Abstract: We consider the optimization problem associated with training two-layer ReLU networks with $d$ inputs under the squared loss, where the labels are generated by a target network. Recent work has identified two distinct classes of infinite families of minima: one whose training loss vanishes in the high-dimensional limit, and another whose loss remains bounded away from zero. The latter family is empirically avoided by stochastic gradient descent, hence \emph{hidden}, motivating the search for analytic criteria that distinguish hidden from non-hidden minima. A key challenge is that prior analyses have shown the Hessian spectra at hidden and non-hidden minima to coincide up to terms of order $O(d^{-1/2})$, seemingly limiting the discriminative power of spectral methods. We therefore take a different route, studying instead certain curves along which the loss is locally minimized. Our main result shows that arcs emanating from hidden minima exhibit distinctive structural and symmetry properties, arising precisely from $\Omega(d^{-1/2})$ eigenvalue contributions that are absent from earlier analyses.
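For concreteness, the following is a minimal sketch of the student-teacher objective described in the abstract: an empirical squared loss for a two-layer ReLU student fit to labels produced by a two-layer ReLU teacher. The choices below (unit second-layer weights, $k$ hidden units, standard Gaussian inputs, the $1/\sqrt{d}$ weight scaling) are illustrative assumptions not specified in the abstract.

```python
# Sketch of the student-teacher squared-loss objective (assumptions noted above).
import numpy as np

def two_layer_relu(X, W):
    """Output sum_i ReLU(w_i . x) for each row x of X.
    X: (n, d) inputs; W: (k, d) first-layer weights; unit second-layer weights."""
    return np.maximum(X @ W.T, 0.0).sum(axis=1)

def squared_loss(W_student, W_teacher, X):
    """Mean squared error between student and teacher outputs on the sample X."""
    residual = two_layer_relu(X, W_student) - two_layer_relu(X, W_teacher)
    return 0.5 * np.mean(residual ** 2)

# Illustrative sizes: d-dimensional Gaussian inputs, k neurons, n samples.
d, k, n = 50, 4, 10_000
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
W_teacher = rng.standard_normal((k, d)) / np.sqrt(d)
W_student = rng.standard_normal((k, d)) / np.sqrt(d)
print(squared_loss(W_student, W_teacher, X))
```

The minima discussed in the abstract are critical points of this loss in the student weights; the hidden/non-hidden distinction concerns which of them retain loss bounded away from zero as $d$ grows.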
