Structured vs. Unstructured Pruning: An Exponential Gap
arXiv:2603.02234v1 Announce Type: new
Abstract: The Strong Lottery Ticket Hypothesis (SLTH) posits that large, randomly initialized neural networks contain sparse subnetworks capable of approximating a target function at initialization without training, suggesting that pruning alone is sufficient. Pruning methods are typically classified as unstructured, where individual weights can be removed from the network, and structured, where parameters are removed according to specific patterns, as in neuron pruning. Existing theoretical results supporting the SLTH rely almost exclusively on unstructured pruning, showing that logarithmic overparameterization suffices to approximate simple target networks. In contrast, neuron pruning has received limited theoretical attention. In this work, we consider the problem of approximating a single bias-free ReLU neuron using a randomly initialized bias-free two-layer ReLU network, thereby isolating the intrinsic limitations of neuron pruning. We show that neuron pruning requires a starting network with $\Omega(d/\varepsilon)$ hidden neurons to $\varepsilon$-approximate a target ReLU neuron. In contrast, weight pruning achieves $\varepsilon$-approximation with only $O(d\log(1/\varepsilon))$ neurons, establishing an exponential separation between the two pruning paradigms.
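To make the setting concrete, the following minimal sketch (not the paper's construction; all names and the masking scheme are our own illustration) contrasts the two pruning paradigms on a random bias-free two-layer ReLU network $f(x) = v^\top \mathrm{ReLU}(Wx)$: unstructured pruning zeroes arbitrary individual entries of $W$, while structured (neuron) pruning makes a single keep-or-drop decision per hidden neuron, zeroing an entire row of $W$ together with the matching entry of $v$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 16                      # input dimension, hidden width
W = rng.standard_normal((m, d))   # first-layer weights (bias-free)
v = rng.standard_normal(m)        # second-layer weights (bias-free)

def forward(W, v, x):
    """Two-layer bias-free ReLU network: v^T ReLU(W x)."""
    return v @ np.maximum(W @ x, 0.0)

# Unstructured (weight) pruning: an arbitrary 0/1 mask over individual entries.
weight_mask = rng.random((m, d)) < 0.5
W_unstructured = W * weight_mask

# Structured (neuron) pruning: one 0/1 decision per hidden neuron,
# removing the whole row of W and the corresponding entry of v.
neuron_mask = rng.random(m) < 0.5
W_structured = W * neuron_mask[:, None]
v_structured = v * neuron_mask

x = rng.standard_normal(d)
print(forward(W_unstructured, v, x))      # pruned at the weight level
print(forward(W_structured, v_structured, x))  # pruned at the neuron level
```

The sketch shows why neuron pruning is the weaker operation: its search space is only the $2^m$ subsets of hidden units, whereas weight pruning can choose among $2^{md}$ entrywise masks, which is the asymmetry the $\Omega(d/\varepsilon)$ versus $O(d\log(1/\varepsilon))$ separation quantifies.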