On the Loss Landscape Geometry of Regularized Deep Matrix Factorization: Uniqueness and Sharpness
arXiv:2603.27072v1 Announce Type: new Abstract: Weight decay is ubiquitous in training deep neural network architectures. Its empirical success is often attributed to capacity control; nonetheless, our theoretical understanding of its effect on the loss landscape and the set of minimizers remains limited. In this paper, we show that $\ell^2$-regularized deep matrix factorization/deep linear network training problems with squared-error loss admit a unique end-to-end minimizer for all target matrices subject to factorization, except for a set of Lebesgue measure […]
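For orientation, the training problem the abstract refers to can be sketched as follows (a standard formulation assumed here, not quoted from the paper): factor a target matrix $M$ through $L$ layers $W_1, \dots, W_L$, penalizing each factor's squared Frobenius norm with weight-decay strength $\lambda > 0$:

```latex
\min_{W_1,\dots,W_L}\;
\underbrace{\bigl\| W_L W_{L-1} \cdots W_1 - M \bigr\|_F^2}_{\text{squared-error loss}}
\;+\;
\underbrace{\lambda \sum_{i=1}^{L} \| W_i \|_F^2}_{\ell^2\text{ regularization (weight decay)}}
```

The "end-to-end" minimizer is the product $W_L \cdots W_1$; the paper's claim concerns uniqueness of this product for Lebesgue-almost-every target $M$.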