ZipAR: Parallel Autoregressive Image Generation through Spatial Locality

Yefei He, Feng Chen, Yuanyu He, Shaoxuan He, Hong Zhou, Kaipeng Zhang, Bohan Zhuang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:22368-22378, 2025.

Abstract

In this paper, we propose ZipAR, a training-free, plug-and-play parallel decoding framework for accelerating autoregressive (AR) visual generation. The motivation stems from the observation that images exhibit local structures, and spatially distant regions tend to have minimal interdependence. Given a partially decoded set of visual tokens, in addition to the original next-token prediction scheme in the row dimension, the tokens corresponding to spatially adjacent regions in the column dimension can be decoded in parallel. To ensure alignment with the contextual requirements of each token, we employ an adaptive local window assignment scheme with rejection sampling analogous to speculative decoding. By decoding multiple tokens in a single forward pass, the number of forward passes required to generate an image is significantly reduced, resulting in a substantial improvement in generation efficiency. Experiments demonstrate that ZipAR can reduce the number of model forward passes by up to 91% on the Emu3-Gen model without requiring any additional retraining.
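The scheduling idea in the abstract can be made concrete with a toy dependency model. This is a minimal sketch, not the paper's algorithm: we assume a token at grid position (r, c) becomes decodable once its left neighbour (r, c-1) is done and the row above has been decoded up to a local window of w columns ahead, and we count how many forward passes are needed to fill an H x W token grid under that rule versus plain next-token decoding. The window rule, function names, and grid size are all illustrative assumptions.

```python
def decode_steps(H, W, w=None):
    """Count forward passes to fill an H x W token grid.

    w=None models plain raster-order next-token decoding (one token
    per pass). An integer w models ZipAR-style parallel decoding: a
    token waits on its left neighbour and on column min(c + w, W-1)
    of the row above, so several rows advance in the same pass.
    """
    if w is None:
        return H * W  # strictly sequential: one pass per token
    step = [[0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            left = step[r][c - 1] if c > 0 else 0
            above = step[r - 1][min(c + w, W - 1)] if r > 0 else 0
            step[r][c] = max(left, above) + 1  # earliest pass for (r, c)
    return step[H - 1][W - 1]  # pass in which the last token is emitted

if __name__ == "__main__":
    H = W = 8
    print("sequential passes:", decode_steps(H, W))        # 64
    print("windowed passes:  ", decode_steps(H, W, w=2))
```

Under this toy rule, each new row starts only w + 1 passes after the previous one rather than W passes, which is where the reduction in forward passes comes from; the paper's adaptive window assignment with rejection sampling additionally guards output quality, which this sketch does not model.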

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-he25b,
  title     = {{Z}ip{AR}: Parallel Autoregressive Image Generation through Spatial Locality},
  author    = {He, Yefei and Chen, Feng and He, Yuanyu and He, Shaoxuan and Zhou, Hong and Zhang, Kaipeng and Zhuang, Bohan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {22368--22378},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/he25b/he25b.pdf},
  url       = {https://proceedings.mlr.press/v267/he25b.html},
  abstract  = {In this paper, we propose ZipAR, a training-free, plug-and-play parallel decoding framework for accelerating autoregressive (AR) visual generation. The motivation stems from the observation that images exhibit local structures, and spatially distant regions tend to have minimal interdependence. Given a partially decoded set of visual tokens, in addition to the original next-token prediction scheme in the row dimension, the tokens corresponding to spatially adjacent regions in the column dimension can be decoded in parallel. To ensure alignment with the contextual requirements of each token, we employ an adaptive local window assignment scheme with rejection sampling analogous to speculative decoding. By decoding multiple tokens in a single forward pass, the number of forward passes required to generate an image is significantly reduced, resulting in a substantial improvement in generation efficiency. Experiments demonstrate that ZipAR can reduce the number of model forward passes by up to 91% on the Emu3-Gen model without requiring any additional retraining.}
}
Endnote
%0 Conference Paper
%T ZipAR: Parallel Autoregressive Image Generation through Spatial Locality
%A Yefei He
%A Feng Chen
%A Yuanyu He
%A Shaoxuan He
%A Hong Zhou
%A Kaipeng Zhang
%A Bohan Zhuang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-he25b
%I PMLR
%P 22368--22378
%U https://proceedings.mlr.press/v267/he25b.html
%V 267
%X In this paper, we propose ZipAR, a training-free, plug-and-play parallel decoding framework for accelerating autoregressive (AR) visual generation. The motivation stems from the observation that images exhibit local structures, and spatially distant regions tend to have minimal interdependence. Given a partially decoded set of visual tokens, in addition to the original next-token prediction scheme in the row dimension, the tokens corresponding to spatially adjacent regions in the column dimension can be decoded in parallel. To ensure alignment with the contextual requirements of each token, we employ an adaptive local window assignment scheme with rejection sampling analogous to speculative decoding. By decoding multiple tokens in a single forward pass, the number of forward passes required to generate an image is significantly reduced, resulting in a substantial improvement in generation efficiency. Experiments demonstrate that ZipAR can reduce the number of model forward passes by up to 91% on the Emu3-Gen model without requiring any additional retraining.
APA
He, Y., Chen, F., He, Y., He, S., Zhou, H., Zhang, K. & Zhuang, B. (2025). ZipAR: Parallel Autoregressive Image Generation through Spatial Locality. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:22368-22378. Available from https://proceedings.mlr.press/v267/he25b.html.