A New Uncertainty Principle in Machine Learning
arXiv:2603.06634v1 Announce Type: new
Abstract: Many scientific problems in the context of machine learning can be reduced to a search for polynomial answers in appropriate variables. The Heavisidization of an arbitrary polynomial is provided by one and the same two-layer expression. What prevents the use of this simple idea is the fatal degeneracy of the Heaviside and sigmoid expansions, which traps the steepest-descent evolution at the bottom of canyons, close to the starting point but far from the desired true minimum. This problem is unavoidable and can be formulated as a peculiar uncertainty principle: the sharper the minimum, the smoother the canyons. It is a direct analogue of the usual uncertainty principle, which is a characteristic property of the more familiar Fourier expansion. Standard machine-learning software fights this problem empirically, for example by running evolutions from randomly distributed starting points and then selecting the best one. Surprisingly or not, the phenomena and problems encountered in ML applications to science are purely scientific and belong to physics, not computer science. At the same time, they sound slightly different and shed new light on well-known phenomena; for example, they extend the uncertainty principle from Fourier and, later, wavelet analysis to a new, peculiar class of nearly singular sigmoid functions.
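
A minimal sketch of the random-restart mitigation mentioned above, assuming a particular setup for illustration: a two-layer sigmoid model sum_k c_k * sigmoid(w_k x + b_k) is fitted to an example polynomial target by plain gradient descent from several random initializations, and the lowest-loss run is kept. The model form, the target polynomial, and all hyperparameters are assumptions, not taken from the paper.

```python
# Illustration only: random-restart steepest descent for a two-layer
# sigmoid model fitted to a polynomial target. The model, target, and
# hyperparameters are assumptions chosen for this sketch.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
target = x**3 - x  # example polynomial to approximate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def model(params, x):
    # Two-layer expression: sum over k of c_k * sigmoid(w_k * x + b_k)
    w, b, c = params
    return sigmoid(np.outer(x, w) + b) @ c

def loss(params, x, y):
    return np.mean((model(params, x) - y) ** 2)

def num_grad(params, x, y, eps=1e-6):
    # Central-difference gradient, to keep the sketch dependency-free.
    grads = []
    for p in params:
        g = np.zeros_like(p)
        flat, gflat = p.ravel(), g.ravel()  # views into p and g
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + eps; up = loss(params, x, y)
            flat[i] = old - eps; dn = loss(params, x, y)
            flat[i] = old
            gflat[i] = (up - dn) / (2.0 * eps)
        grads.append(g)
    return grads

def descend(params, x, y, lr=0.1, steps=2000):
    # Plain steepest descent; typically stalls in a canyon near the start.
    for _ in range(steps):
        for p, g in zip(params, num_grad(params, x, y)):
            p -= lr * g
    return params, loss(params, x, y)

K = 8  # hidden units
best = None
for restart in range(10):  # randomly distributed starting points
    params = [rng.normal(size=K), rng.normal(size=K), rng.normal(size=K)]
    params, final = descend(params, x, target)
    print(f"restart {restart}: final loss {final:.5f}")
    if best is None or final < best[1]:
        best = (params, final)
print(f"best of 10 restarts: loss {best[1]:.5f}")
```

Different restarts typically end at visibly different losses, which is consistent with the trajectories being trapped in different canyons; selecting the best run is exactly the empirical fix the abstract describes.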