Geometric separation and constructive universal approximation with two hidden layers
arXiv:2602.12482v1 Announce Type: new
Abstract: We give a geometric construction of neural networks that separate disjoint compact subsets of $\Bbb R^n$, and use it to obtain a constructive universal approximation theorem. Specifically, we show that networks with two hidden layers and either a sigmoidal activation (i.e., strictly monotone bounded continuous) or the ReLU activation can approximate any real-valued continuous function on an arbitrary compact set $K \subset \Bbb R^n$ to any prescribed accuracy in the uniform norm. For finite $K$, the construction simplifies and yields a sharp depth-2 (single hidden layer) approximation result.
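The paper's construction is not reproduced here, but a classical one-dimensional special case illustrates the constructive flavor of such results: on a compact interval, the piecewise-linear interpolant of a continuous function is exactly a single-hidden-layer ReLU network, so uniform approximation follows from interpolation alone. The sketch below (function choice, knot count, and helper names are illustrative assumptions, not from the paper) builds that network explicitly and checks the sup-norm error on a fine grid.

```python
import numpy as np

# Illustrative sketch, not the paper's construction: in 1-D, the piecewise-
# linear interpolant of a continuous f on [a, b] is a one-hidden-layer ReLU
# network  fhat(x) = f(x_0) + sum_i c_i * relu(x - x_i),
# where the c_i are the successive slope changes of the interpolant.

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, a, b, n_knots):
    """One-hidden-layer ReLU network matching f at n_knots equispaced points."""
    xs = np.linspace(a, b, n_knots)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)                   # slope on each interval
    c = np.concatenate(([slopes[0]], np.diff(slopes)))   # slope increments

    def fhat(x):
        x = np.asarray(x, dtype=float)
        # Hidden layer: one ReLU unit per knot; output layer: weights c.
        return ys[0] + relu(x[..., None] - xs[:-1]) @ c

    return fhat

# Example: approximate cos on the compact set [0, 2*pi] (assumed test case).
f = np.cos
fhat = relu_interpolant(f, 0.0, 2 * np.pi, n_knots=200)
grid = np.linspace(0.0, 2 * np.pi, 10_000)
print("sup-norm error:", np.abs(f(grid) - fhat(grid)).max())
```

Refining the knot spacing drives the uniform error to zero by continuity of $f$ on the compact interval; the abstract's result extends this kind of explicit construction to arbitrary compact $K \subset \Bbb R^n$ using two hidden layers.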