The Dimensionality Gap: Three Scaling Barriers to AGI on Von Neumann Architectures, and a Path Forward Through Neuromorphic-Photonic Substrates
The prevailing assumption in artificial general intelligence (AGI) research is that scaling current architectures—more parameters, more GPUs, more data—will eventually yield human-level general intelligence. We challenge this assumption by identifying three formal scaling barriers that arise from the structural mismatch between biological neural computation and von Neumann silicon. The brain is a three-dimensional, asynchronous, massively parallel graph of ∼86 billion nodes with average fan-out ∼7,000, operating at ∼20 W. Contemporary processors are two-dimensional, clocked, and memory-bottlenecked. We derive: (1) a communication complexity barrier showing that emulating the brain's nonplanar connectivity on a 2D substrate incurs Ω(N^{2/3} · F) additional data movement per timestep; (2) a serialisation barrier showing that the required memory bandwidth exceeds 10^15 bytes/s, roughly 10^3× current GPU capacity; and (3) an energy barrier showing that silicon emulation at brain scale requires ∼6 MW under current technology, versus 20 W for biology. These are not engineering inconveniences—they are architectural incompatibilities that improve only polynomially with process scaling. We propose a constructive path forward: a hybrid architecture combining 3D-stacked neuromorphic silicon for local synaptic computation with integrated photonic interconnects for high-fan-out, low-energy long-range communication. We specify quantitative targets and identify the critical technology gaps. The paper does not claim that brain emulation is sufficient for AGI, nor that current deep learning is without merit—only that the dominant hardware substrate is structurally mismatched to brain-scale neural computation in ways that scaling alone cannot overcome.
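The quoted figures for the serialisation and energy barriers follow from a back-of-envelope calculation using the brain parameters given above. The sketch below reproduces that arithmetic; the mean firing rate, bytes per synaptic event, and energy per event are illustrative assumptions chosen here, not values derived in the paper.

```python
# Back-of-envelope check of the serialisation (~10^15 bytes/s) and energy
# (~6 MW) barriers. Node count and fan-out come from the abstract; the
# remaining constants are assumptions for illustration.

NEURONS = 86e9            # ~86 billion neurons (from the abstract)
FAN_OUT = 7_000           # average fan-out per neuron (from the abstract)
RATE_HZ = 1.0             # assumed mean firing rate (sparse activity)
BYTES_PER_EVENT = 2       # assumed bytes moved per synaptic event (weight fetch)
JOULES_PER_EVENT = 10e-9  # assumed ~10 nJ/event incl. off-chip data movement

events_per_s = NEURONS * FAN_OUT * RATE_HZ    # ~6e14 synaptic events/s
bandwidth = events_per_s * BYTES_PER_EVENT    # required memory traffic, bytes/s
power_watts = events_per_s * JOULES_PER_EVENT # emulation power draw, W

print(f"bandwidth ~ {bandwidth:.1e} bytes/s")   # ~1.2e15 bytes/s
print(f"power     ~ {power_watts / 1e6:.1f} MW")  # ~6.0 MW
```

Under these assumptions the bandwidth lands at roughly 10^3× the ∼1 TB/s of a current high-end GPU, and the power at ∼6 MW versus the brain's 20 W, matching the barriers stated above.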