Let the Optimizers Optimize Themselves

arXiv:2512.06370v2 Announce Type: replace-cross
Abstract: We lay the theoretical foundation for automating optimizer design in gradient-based learning. Based on the greedy principle, we formulate the problem of designing optimizers and choosing their hyperparameters as maximizing the instantaneous decrease in loss. By treating an optimizer as a function that translates loss gradient signals into parameter motions, the problem reduces to a family of convex optimization problems over the space of optimizers. Solving these problems under various constraints not only recovers a wide range of popular optimizers as closed-form solutions, but also produces the optimal hyperparameters of these optimizers for the problems at hand. This enables a systematic approach to designing optimizers and tuning their hyperparameters according to gradient statistics collected during training. Furthermore, this optimization of optimization can itself be performed dynamically as training proceeds.
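
To make the greedy formulation concrete, here is a minimal NumPy sketch (not the paper's code; the function name, constraint choices, and learning rate are illustrative assumptions). It shows how maximizing the instantaneous decrease, i.e. minimizing <g, d> over the parameter motion d under different norm constraints, yields familiar closed-form update rules.

```python
import numpy as np

def greedy_step(grad, constraint="l2", lr=1e-2):
    """Pick the motion d maximizing the instantaneous loss decrease -<grad, d>
    subject to a norm constraint ||d|| <= lr.

    Closed-form maximizers under two example constraints (a sketch, not the
    paper's derivation):
      * l2 ball   -> steepest descent along -grad (normalized SGD)
      * linf ball -> sign descent (signSGD-style update)
    """
    if constraint == "l2":
        norm = np.linalg.norm(grad)
        return -lr * grad / norm if norm > 0 else np.zeros_like(grad)
    elif constraint == "linf":
        return -lr * np.sign(grad)
    else:
        raise ValueError(f"unknown constraint: {constraint}")

# Toy usage: one greedy step on the quadratic loss f(w) = 0.5 * ||w||^2.
w = np.array([3.0, -4.0])
g = w  # gradient of the quadratic loss at w
print(w + greedy_step(g, "l2", lr=0.1))    # move along -g / ||g||
print(w + greedy_step(g, "linf", lr=0.1))  # move along -sign(g)
```

Under this reading, the choice of constraint plays the role of the optimizer's design, and fitting the constraint (e.g. its radius) to observed gradient statistics plays the role of hyperparameter tuning.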
