Entropic-Time Inference: Self-Organizing Large Language Model Decoding Beyond Attention

arXiv:2603.03310v1 Announce Type: new
Abstract: Modern large language model (LLM) inference engines optimize throughput and latency under fixed decoding rules, treating generation as a linear progression in token time. We propose a fundamentally different paradigm: entropic-time inference, where decoding is governed by the flow of uncertainty rather than token index. We introduce a self-organizing inference architecture that jointly couples scheduling, attention sparsification, and sampling temperature under a unified entropy control objective. Our method extends vLLM with entropy-aware scheduling, entropic pruning of paged attention blocks, and adaptive temperature control that stabilizes generation near a target entropy regime. This transforms inference into a resource-intelligent thermodynamic process that allocates computation where uncertainty reduction is maximized. We present a concrete systems design, pseudocode, and integration plan, demonstrating how entropy can serve as a first-class control signal for scalable LLM inference.
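The adaptive temperature control described in the abstract can be illustrated with a minimal sketch. The paper's actual controller is not given here; the code below assumes a simple proportional feedback rule (the `gain`, `target_entropy`, and clamping bounds are illustrative, not from the paper): measure the Shannon entropy of the temperature-scaled next-token distribution, then nudge the temperature up when entropy is below target and down when above, stabilizing generation near a target entropy regime.

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adapt_temperature(logits, temperature, target_entropy,
                      gain=0.1, t_min=0.1, t_max=2.0):
    """One proportional-control step: raise the temperature when the
    next-token entropy is below target, lower it when above."""
    h = entropy(softmax(logits, temperature))
    temperature += gain * (target_entropy - h)
    return min(max(temperature, t_min), t_max)

# A peaked distribution (low entropy) pushes the temperature up;
# a flat distribution (high entropy) pushes it down.
t_up = adapt_temperature([5.0, 1.0, 0.0], 1.0, target_entropy=1.0)
t_down = adapt_temperature([0.0, 0.0, 0.0], 1.0, target_entropy=0.5)
```

In a full system this update would run once per decoding step, feeding the adjusted temperature back into the sampler; the abstract's coupling to scheduling and attention pruning would drive analogous entropy-based decisions in those components.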
