Dual Space Preconditioning for Gradient Descent in the Overparameterized Regime
arXiv:2603.10485v1 Announce Type: new
Abstract: In this work we study the convergence properties of Dual Space Preconditioned Gradient Descent, which encompasses optimizers such as Normalized Gradient Descent, Gradient Clipping, and Adam. We consider preconditioners of the form $\nabla K$, where $K: \mathbb{R}^p \to \mathbb{R}$ is convex, and assume the method is applied to train an over-parameterized linear model with loss of the form $\ell({X} {W} - {Y})$, for weights ${W} \in \mathbb{R}^{d \times k}$, labels ${Y} \in \mathbb{R}^{n \times k}$, and data ${X} \in \mathbb{R}^{n \times d}$. Under these assumptions, we prove that the iterates of the preconditioned gradient descent always converge to a point ${W}_{\infty} \in \mathbb{R}^{d \times k}$ satisfying ${X}{W}_{\infty} = {Y}$. Our proof techniques are of independent interest, as we introduce a novel version of the Bregman divergence, with accompanying identities, that allows us to establish convergence. We also study the implicit bias of Dual Space Preconditioned Gradient Descent. First, we demonstrate empirically that, for general $K(\cdot)$, ${W}_\infty$ depends on the chosen learning rate, hindering a precise characterization of the implicit bias. Then, for preconditioners of the form $K({G}) = h(\|{G}\|_F)$, known as \textit{isotropic preconditioners}, we show that ${W}_\infty$ minimizes $\|{W} - {W}_0\|_F^2$ over $\{{W} : {X}{W} = {Y}\}$, where ${W}_0$ is the initialization. Denoting by ${W}_{\text{GD}, \infty}$ the convergence point of GD initialized at ${W}_0$, we thus have ${W}_{\infty} = {W}_{\text{GD}, \infty}$ for isotropic preconditioners. Finally, we show that a similar fact holds for general preconditioners up to a multiplicative constant, namely, $\|{W}_0 - {W}_{\infty}\|_F \le c \|{W}_0 - {W}_{\text{GD}, \infty}\|_F$ for some constant $c>0$.
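To make the setup concrete, the following is a minimal numerical sketch (not the paper's code) of the update ${W}_{t+1} = {W}_t - \eta \nabla K(\nabla f({W}_t))$ with the gradient-clipping preconditioner, an isotropic choice, applied to an over-parameterized least-squares problem with $\ell(R) = \tfrac{1}{2}\|R\|_F^2$. The dimensions, step size, clipping threshold, and iteration count are illustrative assumptions; since every update direction lies in the row space of ${X}$, the limit should match the minimum-distance interpolator ${W}_0 + {X}^+({Y} - {X}{W}_0)$, consistent with the isotropic implicit-bias claim above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 20, 3                      # over-parameterized: d > n
X = rng.standard_normal((n, d))         # data
Y = rng.standard_normal((n, k))         # labels
W0 = rng.standard_normal((d, k))        # initialization

def grad(W):
    # gradient of f(W) = 0.5 * ||X W - Y||_F^2
    return X.T @ (X @ W - Y)

def clip_precond(G, tau=1.0):
    # isotropic preconditioner nabla K(G) = G * min(1, tau / ||G||_F),
    # i.e. gradient clipping (K(G) = h(||G||_F) for a suitable convex h)
    nrm = np.linalg.norm(G)
    return G * min(1.0, tau / nrm) if nrm > 0 else G

eta = 0.01
W = W0.copy()
for _ in range(20000):
    W = W - eta * clip_precond(grad(W))

# iterates converge to an interpolating point: X W_inf = Y
print("residual:", np.linalg.norm(X @ W - Y))

# W_inf agrees with the minimum-distance interpolator from W0,
# i.e. the point GD itself would converge to
W_star = W0 + np.linalg.pinv(X) @ (Y - X @ W0)
print("distance to min-norm solution:", np.linalg.norm(W - W_star))
```

Both printed quantities should be near zero: the first verifies convergence to $\{W : XW = Y\}$, the second the implicit bias toward the initialization.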