Step-Size Decay and Structural Stagnation in Greedy Sparse Learning

Greedy algorithms are central to sparse approximation and stage-wise learning methods such as matching pursuit and boosting. It is known that, in general Hilbert spaces, the Power-Relaxed Greedy Algorithm with step sizes $m^{-\alpha}$ may fail to converge when $\alpha > 1$. In this work, we revisit this phenomenon from a sparse learning perspective. We study realizable regression problems with controlled feature coherence and derive explicit lower bounds on the residual norm, showing that over-decaying step-size schedules induce structural stagnation even in low-dimensional sparse settings. Numerical experiments confirm the theoretical predictions and illustrate the role of feature coherence. Our results provide insight into step-size design in greedy sparse learning.
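
The following is a minimal sketch of the stagnation phenomenon described above. It assumes a generic relaxed greedy update $f_m = (1 - \lambda_m) f_{m-1} + \lambda_m g_m$ with relaxation $\lambda_m = m^{-\alpha}$; the paper's exact Power-Relaxed Greedy Algorithm may differ in its update rule, and the function name `power_relaxed_greedy`, the toy dictionary, and the target are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def power_relaxed_greedy(X, y, alpha, n_steps):
    """Relaxed greedy regression with step sizes m^{-alpha} (sketch).

    At step m, pick the unit-norm feature (column of X) most
    correlated with the residual, then take the convex-combination
    update f_m = (1 - lam) * f_{m-1} + lam * g with lam = m**(-alpha).
    """
    f = np.zeros(X.shape[0])
    residuals = []
    for m in range(1, n_steps + 1):
        r = y - f
        corr = X.T @ r
        k = np.argmax(np.abs(corr))
        g = np.sign(corr[k]) * X[:, k]   # best dictionary element (with sign)
        lam = m ** (-alpha)              # decaying step size m^{-alpha}
        f = (1.0 - lam) * f + lam * g    # relaxed greedy update
        residuals.append(np.linalg.norm(r))
    return residuals

# Toy realizable problem: y is a convex combination of two features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X /= np.linalg.norm(X, axis=0)           # unit-norm dictionary
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]

for alpha in (1.0, 1.5):                 # alpha > 1 over-decays
    res = power_relaxed_greedy(X, y, alpha, n_steps=500)
    print(f"alpha={alpha}: final residual {res[-1]:.4f}")
```

Under this toy setup, the intuition is that for $\alpha > 1$ the step sizes are summable, so the total movement of the iterates is bounded and the residual norm can plateau above zero, whereas $\alpha = 1$ allows continued progress.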
