A-IO: Adaptive Inference Orchestration for Memory-Bound NPUs
arXiv:2604.09752v1 Announce Type: new
Abstract: During the deployment of Large Language Models (LLMs), the autoregressive decoding phase on heterogeneous NPU platforms (e.g., Ascend 910B) faces severe memory-bound challenges. This study reveals the "Model Scaling Paradox" caused by the static deployment of single-sized models. It also points out the kernel synchronization overhead of fine-grained speculative decoding (Leviathan et al., 2023; Chen et al., 2023) under NPU computational-graph compilation, and the severe limitations of relying purely on micro-level acceleration algorithms such as Prompt Lookup Decoding (PLD).
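Prompt Lookup Decoding, mentioned above as a micro-level acceleration algorithm, proposes draft tokens for speculative verification by matching the most recent n-gram of the context against its earlier occurrences, rather than running a separate draft model. The following is a minimal sketch of that draft-proposal step; the function name and parameters are illustrative, not taken from the paper or any library:

```python
def pld_draft(tokens, max_ngram=3, num_draft=5):
    """Illustrative Prompt Lookup Decoding draft step: match the
    trailing n-gram of `tokens` against earlier positions in the
    context and return the tokens that followed as the draft."""
    # Prefer longer n-grams: a longer match is a stronger signal.
    for n in range(max_ngram, 0, -1):
        if len(tokens) <= n:
            continue
        tail = tokens[-n:]
        # Scan earlier positions from most recent to oldest.
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == tail:
                # Tokens after the earlier match become the draft,
                # to be verified in parallel by the target model.
                draft = tokens[i + n:i + n + num_draft]
                if draft:
                    return draft
    return []  # no match: fall back to ordinary decoding
```

Because drafts are produced by pure lookup, the per-step cost is negligible, but acceptance depends entirely on the prompt containing repeated spans; the abstract's point is that such micro-level tricks alone do not resolve the memory-bound bottleneck on NPUs.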