Speculative Decoding for Multimodal Models: A Survey

Multimodal generative models have demonstrated remarkable capabilities in visual understanding, audio synthesis, and embodied control. These capabilities, however, come with substantial inference overhead due to autoregressive decoding or iterative generation processes, compounded by modality-specific challenges such as extensive visual token redundancy, strict real-time latency requirements in robotic control, and prolonged sequential generation in text-to-image synthesis. Speculative decoding has emerged as a promising paradigm for accelerating inference without degrading output quality, yet existing surveys remain focused on text-only large language models. In this survey, we provide a systematic and comprehensive review of speculative decoding methods for multimodal models, spanning Vision–Language, Text-to-Image, Vision–Language–Action, Video–Language, Speech, and Diffusion models. We organize the literature in a unified taxonomy with two primary axes, covering the draft generation stage and the verification and acceptance stage, complemented by an analysis of inference framework support. Through this taxonomy, we identify recurring design patterns, including token compression, target-informed transfer, and relaxed acceptance, and examine how successful techniques transfer across modalities. We further provide a systematic comparison of existing methods under both self-reported and standardized benchmarking settings. Finally, we discuss open challenges and outline future directions. We have also created a GitHub repository organizing the papers featured in this survey at https://github.com/zyfzs0/Multimodal-Models-Speculative-Decoding-Survey, and we will actively maintain it to incorporate new research as it emerges. We hope this survey serves as a valuable resource for researchers and practitioners working on accelerating multimodal inference.
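The two taxonomy axes mentioned above mirror the core speculative decoding loop: a cheap draft model proposes a block of tokens, and the expensive target model verifies them, accepting a prefix. The toy sketch below illustrates the greedy-acceptance variant of this loop; the `draft_model` and `target_model` callables, and the integer "tokens," are stand-ins for illustration only, not any real model API.

```python
# Toy sketch of speculative decoding (greedy-acceptance variant):
# the draft model proposes k tokens, the target model checks them,
# and tokens are accepted up to the first disagreement.

def speculative_decode(target_model, draft_model, prompt, k=3, max_new=8):
    """Generate up to max_new tokens, drafting k tokens per iteration."""
    seq = list(prompt)
    generated = 0
    while generated < max_new:
        # 1) Draft stage: the small model proposes tokens autoregressively.
        draft, ctx = [], seq[:]
        for _ in range(min(k, max_new - generated)):
            tok = draft_model(ctx)
            draft.append(tok)
            ctx.append(tok)
        # 2) Verification stage: the target model scores the drafted
        #    positions (in practice, in a single parallel forward pass;
        #    simulated here position by position).
        accepted = 0
        for i, tok in enumerate(draft):
            if target_model(seq + draft[:i]) == tok:
                accepted += 1
            else:
                break
        seq += draft[:accepted]
        generated += accepted
        # 3) On rejection (or full acceptance of a short block), the target
        #    model contributes one token, guaranteeing progress and keeping
        #    the output identical to plain target-model decoding.
        if generated < max_new:
            seq.append(target_model(seq))
            generated += 1
    return seq

# Toy "models" over integer tokens: the drafter agrees with the target
# except when the next token is divisible by 5, forcing a rejection.
target = lambda ctx: (ctx[-1] + 1) % 100
drafter = lambda ctx: (ctx[-1] + 1) % 100 if (ctx[-1] + 1) % 5 else (ctx[-1] + 2) % 100

out = speculative_decode(target, drafter, prompt=[0], k=3, max_new=8)
print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 8] — same as plain target decoding
```

Because every accepted token matches what the target model would have produced, this variant is lossless; the survey's relaxed-acceptance methods trade some of this strictness for higher acceptance rates.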
