Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions

Jaeyeon Kim, Kulin Shah, Vasilis Kontonis, Sham M. Kakade, Sitan Chen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:30749-30768, 2025.

Abstract

In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$% to $\approx 90$%, even outperforming ARMs that were explicitly trained via teacher forcing to learn the right order of decoding.
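The adaptive inference idea in the abstract, decoding the masked position the model is currently most confident about rather than a fixed left-to-right order, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `toy_model` below is a hypothetical stand-in for an MDM's per-position predictive distributions, and the confidence rule is an assumption for demonstration.

```python
# Hedged sketch of adaptive ("easiest-first") decoding for a masked sequence.
# toy_model is a hypothetical stand-in for an MDM's per-position predictions;
# it is NOT the model from the paper.

MASK = None

def toy_model(seq):
    """Toy oracle for the constraint 'token i equals i': for each masked
    position, return (confidence, predicted token). Positions adjacent to
    an already-revealed token are treated as 'easy' (high confidence);
    isolated positions as 'hard' (low confidence)."""
    preds = {}
    for i, tok in enumerate(seq):
        if tok is not MASK:
            continue
        has_revealed_neighbor = any(
            0 <= j < len(seq) and seq[j] is not MASK for j in (i - 1, i + 1)
        )
        conf = 0.9 if has_revealed_neighbor else 0.2
        preds[i] = (conf, i)  # predict token = index
    return preds

def adaptive_decode(seq):
    """Repeatedly fill in the masked position the model is most confident
    about, sidestepping hard subproblems until context makes them easy."""
    seq = list(seq)
    while MASK in seq:
        preds = toy_model(seq)
        i = max(preds, key=lambda j: preds[j][0])  # easiest subproblem first
        seq[i] = preds[i][1]
    return seq
```

Under this toy confidence rule, decoding spreads outward from revealed tokens instead of following a fixed order; the paper's experiments apply the same principle with a pretrained MDM's own predictive confidences.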

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kim25ah,
  title     = {Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions},
  author    = {Kim, Jaeyeon and Shah, Kulin and Kontonis, Vasilis and Kakade, Sham M. and Chen, Sitan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {30749--30768},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kim25ah/kim25ah.pdf},
  url       = {https://proceedings.mlr.press/v267/kim25ah.html},
  abstract  = {In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$% to $\approx 90$%, even outperforming ARMs that were explicitly trained via teacher forcing to learn the right order of decoding.}
}
Endnote
%0 Conference Paper
%T Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
%A Jaeyeon Kim
%A Kulin Shah
%A Vasilis Kontonis
%A Sham M. Kakade
%A Sitan Chen
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kim25ah
%I PMLR
%P 30749--30768
%U https://proceedings.mlr.press/v267/kim25ah.html
%V 267
%X In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$% to $\approx 90$%, even outperforming ARMs that were explicitly trained via teacher forcing to learn the right order of decoding.
APA
Kim, J., Shah, K., Kontonis, V., Kakade, S.M. & Chen, S. (2025). Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:30749-30768. Available from https://proceedings.mlr.press/v267/kim25ah.html.

Related Material