Benchmarking the Energy Savings with Speculative Decoding Strategies
arXiv:2602.09113v1 Announce Type: new
Abstract: Speculative decoding has emerged as an effective method for reducing the latency and cost of LLM inference. However, the energy requirements of these strategies have received little attention. To address this gap, this paper presents a comprehensive survey of the energy requirements of speculative decoding strategies, with a detailed analysis of how various factors (model size and family, speculative decoding strategy, and dataset characteristics) influence the achievable energy savings.
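The abstract does not describe the measurement setup, but the kind of comparison it implies can be sketched as follows: measure cumulative GPU energy around a standard autoregressive generation run and around a speculative (assisted) run, then compare the two. This is a minimal illustration assuming a HuggingFace transformers assistant-model setup and NVML energy counters; the model names, prompt, and token budget are illustrative placeholders, not the paper's actual configuration.

```python
# Hypothetical sketch: compare GPU energy for standard vs. speculative
# (draft-assisted) decoding. Models, prompt, and token budget are
# illustrative; the paper's actual benchmark setup is not given in the abstract.
import pynvml
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def gpu_energy_mj():
    # Cumulative GPU energy (millijoules) since driver load.
    return pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)

def measure(generate_fn):
    # Run a generation call and return the energy it consumed, in joules.
    torch.cuda.synchronize()
    start = gpu_energy_mj()
    out = generate_fn()
    torch.cuda.synchronize()
    return out, (gpu_energy_mj() - start) / 1000.0

device = "cuda"
target = AutoModelForCausalLM.from_pretrained("gpt2-xl").to(device)
draft = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
tok = AutoTokenizer.from_pretrained("gpt2-xl")
inputs = tok("Speculative decoding works by", return_tensors="pt").to(device)

# Baseline: autoregressive decoding with the target model alone.
_, e_base = measure(lambda: target.generate(**inputs, max_new_tokens=128))
# Speculative decoding: the draft model proposes tokens, the target verifies them.
_, e_spec = measure(lambda: target.generate(**inputs, max_new_tokens=128,
                                            assistant_model=draft))
print(f"baseline: {e_base:.1f} J, speculative: {e_spec:.1f} J")
```

In practice such a comparison would be repeated over many prompts and averaged, since per-request energy readings are noisy and depend on acceptance rates of the drafted tokens.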