The Supervision Paradox: AI Capability Growth Necessitates Usage Contraction in High-Loss Domains
Current AI governance rests on an implicit assumption: that human-in-the-loop oversight can scale safely alongside generative AI systems. We argue that this assumption is structurally untenable. Three constraints hold simultaneously: legal liability remains with humans, human cognitive throughput has a biological ceiling, and economic pressures drive AI output velocity beyond that ceiling. When output velocity exceeds human processing limits, oversight becomes nominal and humans are reduced to what Elish (2019) termed “moral crumple zones”. Unlike physical automation, where anomaly criteria are externally defined, generative AI requires supervisors to evaluate cognitive products (text, reasoning, analysis) against internally held standards. Through prediction-error minimization, repeated exposure to AI output patterns recalibrates these internal standards, degrading anomaly detection even when supervisors remain attentive. This degradation renders error detection structurally deficient, leaving dormant the penalty function that should restrain output expansion. Risks therefore accumulate invisibly and manifest as threshold shocks rather than gradual corrections. Under these dynamics, expected loss diverges with increasing output velocity, and the irreducibility of error probability in probabilistic systems ensures that model capability improvements cannot offset this divergence. We derive that rate-limiting AI output to within human processing capacity is the only variable available for bounding expected loss, and propose a flow-design governance paradigm as a principled alternative to supervision-enhancing approaches. Specifically, we outline hard caps on daily case loads, prohibitions on batch approval, mandatory friction in approval interfaces, and adoption-rate ceilings. The theoretical consequence is counterintuitive: as generative AI capability grows, its autonomous use in high-loss domains will contract.
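A minimal formal sketch of the divergence argument is given below; all symbols (v, kappa, p_e, d, c) are introduced here purely for illustration and are not the paper's own notation.

% Hypothetical notation: v = AI output velocity, kappa = human supervisory
% throughput ceiling, p_e = irreducible per-item error probability,
% d(v) = probability that a supervisor detects a given error, c = loss per
% undetected error.
\begin{equation*}
  \mathbb{E}\bigl[L(v)\bigr] \;=\; v \, p_e \, \bigl(1 - d(v)\bigr) \, c,
  \qquad p_e > 0, \qquad d(v) \to 0 \ \text{as } v \text{ grows past } \kappa .
\end{equation*}
% Since p_e is bounded away from zero and d(v) collapses once v exceeds kappa,
% E[L(v)] grows at least linearly in v. Capability improvements can shrink p_e
% but cannot make it zero, so constraining v to at most kappa is the only term
% on the right-hand side that an operator can use to bound the expected loss.

Under these illustrative assumptions, the flow-design measures listed above (daily caps, batch-approval prohibitions, interface friction, adoption-rate ceilings) can all be read as controls on v, the one free variable in the bound.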