Adaptation to Intrinsic Dependence in Diffusion Language Models
arXiv:2602.20126v1 Announce Type: cross
Abstract: Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) approaches, enabling parallel token generation beyond a rigid left-to-right order. Despite growing empirical success, the theoretical understanding of how unmasking schedules — which specify the order in which tokens are unmasked and how many are revealed per step during sampling — affect generation quality remains limited. In this work, we introduce a distribution-agnostic unmasking schedule for DLMs that adapts to the (unknown) dependence structure of the target data distribution, without requiring any prior knowledge or hyperparameter tuning. In contrast to prior deterministic procedures that fix unmasking sizes, our method randomizes the number of tokens revealed at each iteration. We show that, for two specific parameter choices, the sampling convergence guarantees — measured by Kullback-Leibler (KL) divergence — scale as $\widetilde O(\mathsf{TC}/K)$ and $\widetilde O(\mathsf{DTC}/K)$ respectively. Here, $K$ is the number of iterations, and $\mathsf{TC}$ and $\mathsf{DTC}$ are the total correlation and dual total correlation of the target distribution, capturing the intrinsic dependence structure underlying the data. Importantly, our guarantees hold in the practically relevant parallel-sampling regime $K
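The abstract's key idea — a schedule that randomizes how many masked tokens are revealed at each of the $K$ iterations, rather than fixing the per-step unmasking size — can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's algorithm: the names (`randomized_unmasking_schedule`, `sample`, `denoiser`) and the specific randomization (each position independently drawing a uniform reveal time, so step sizes are random rather than deterministic) are illustrative assumptions.

```python
import random


def randomized_unmasking_schedule(seq_len, num_iters, rng):
    """Hypothetical sketch: assign each position an independent uniform
    reveal-time in {0, ..., num_iters - 1}. The number of tokens revealed
    at each iteration is then random (Binomial(seq_len, 1/num_iters)),
    in contrast to deterministic schedules with fixed unmasking sizes."""
    schedule = [[] for _ in range(num_iters)]
    for pos in range(seq_len):
        schedule[rng.randrange(num_iters)].append(pos)
    return schedule


def sample(seq_len, num_iters, denoiser, rng):
    """Run num_iters parallel-unmasking steps; None plays the mask token."""
    tokens = [None] * seq_len
    for positions in randomized_unmasking_schedule(seq_len, num_iters, rng):
        # Reveal this step's (randomly sized) batch of positions in
        # parallel, conditioning on everything unmasked so far.
        for pos in positions:
            tokens[pos] = denoiser(tokens, pos)
    return tokens


# Toy usage with a dummy denoiser that just emits the position index.
rng = random.Random(0)
out = sample(seq_len=16, num_iters=4, denoiser=lambda toks, pos: pos, rng=rng)
```

After the loop every position has been unmasked exactly once, in at most `num_iters` parallel rounds, matching the regime where $K$ is far smaller than the sequence length.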