CLLMs: Consistency Large Language Models

Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:25426-25440, 2024.

Abstract

Jacobi decoding shows promise for more efficient LLM inference as it breaks the sequential nature of the LLM decoding process and transforms it into more parallelizable computation. However, in practice, it achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because Jacobi decoding seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point in a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.
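To make the mechanism concrete, the sketch below illustrates greedy Jacobi decoding of a single n-token block as a fixed-point iteration: the whole block is fed through the model in one parallel forward pass, every position is replaced by the model's greedy prediction given the (possibly stale) tokens before it, and the loop stops once the block no longer changes. This is an illustrative sketch only; the function name jacobi_decode_block, the pad-token initialization, and the use of the Hugging Face transformers API are our assumptions, not the authors' released implementation.

    # Minimal, illustrative sketch of greedy Jacobi decoding for one block.
    # Names and API choices here are assumptions, not the paper's code.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    @torch.no_grad()
    def jacobi_decode_block(model, prompt_ids, n_tokens, pad_id=0, max_iters=None):
        """Decode one n-token block by fixed-point (Jacobi) iteration.

        prompt_ids: LongTensor of shape (1, prompt_len).
        Returns the converged block, which matches greedy AR decoding.
        """
        device = prompt_ids.device
        # Initialize the block with an arbitrary guess (here: pad tokens).
        block = torch.full((1, n_tokens), pad_id, dtype=torch.long, device=device)
        max_iters = max_iters or n_tokens  # converges in at most n_tokens passes

        for _ in range(max_iters):
            # One parallel forward pass over prompt + current guesses.
            logits = model(torch.cat([prompt_ids, block], dim=1)).logits
            # The prediction for block position i comes from the logits at
            # index prompt_len + i - 1 (everything strictly before it).
            new_block = logits[:, prompt_ids.shape[1] - 1 : -1, :].argmax(dim=-1)
            if torch.equal(new_block, block):  # fixed point reached
                break
            block = new_block
        return block

The fixed point of this iteration coincides with the greedy autoregressive output, and the loop needs at most n_tokens passes because each pass corrects at least the first still-incorrect position. Per the abstract, CLLMs fine-tune the target model so that, starting from any intermediate state along this trajectory, a single pass already predicts the converged block, which is what turns the parallel iteration into an actual wall-clock speedup.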

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-kou24a,
  title     = {{CLLM}s: Consistency Large Language Models},
  author    = {Kou, Siqi and Hu, Lanxiang and He, Zhezhi and Deng, Zhijie and Zhang, Hao},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {25426--25440},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/kou24a/kou24a.pdf},
  url       = {https://proceedings.mlr.press/v235/kou24a.html},
  abstract  = {Jacobi decoding shows promise for more efficient LLM inference as it breaks the sequential nature of the LLM decoding process and transforms it into more parallelizable computation. However, in practice, it achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because Jacobi decoding seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point in a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.}
}
Endnote
%0 Conference Paper
%T CLLMs: Consistency Large Language Models
%A Siqi Kou
%A Lanxiang Hu
%A Zhezhi He
%A Zhijie Deng
%A Hao Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-kou24a
%I PMLR
%P 25426--25440
%U https://proceedings.mlr.press/v235/kou24a.html
%V 235
%X Jacobi decoding shows promise for more efficient LLM inference as it breaks the sequential nature of the LLM decoding process and transforms it into more parallelizable computation. However, in practice, it achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because Jacobi decoding seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point in a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.
APA
Kou, S., Hu, L., He, Z., Deng, Z. & Zhang, H. (2024). CLLMs: Consistency Large Language Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:25426-25440. Available from https://proceedings.mlr.press/v235/kou24a.html.