The LLM Mirage: Economic Interests and the Subversion of Weaponization Controls
arXiv:2601.05307v1
Abstract: U.S. AI security policy is increasingly shaped by an $\textit{LLM Mirage}$: the belief that national security risks scale in proportion to the compute used to train frontier language models. That premise fails in two ways. It miscalibrates strategy, because adversaries can obtain weaponizable capabilities from task-specific systems built on specialized data, algorithmic efficiency, and widely available hardware, while compute controls harden only a high-end perimeter. It also destabilizes regulation because, absent a settled […]