Better & Faster Large Language Models via Multi-token Prediction

Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, Gabriel Synnaeve
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:15706-15734, 2024.

Abstract

Large language models such as GPT and Llama are trained with a next-token prediction loss. In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. More specifically, at each position in the training corpus, we ask the model to predict the following $n$ tokens using $n$ independent output heads, operating on top of a shared model trunk. Considering multi-token prediction as an auxiliary training task, we measure improved downstream capabilities with no overhead in training time for both code and natural language models. The method is increasingly useful for larger model sizes, and keeps its appeal when training for multiple epochs. Gains are especially pronounced on generative benchmarks like coding, where our models consistently outperform strong baselines by several percentage points. Our 13B-parameter models solve 12% more problems on HumanEval and 17% more on MBPP than comparable next-token models. Experiments on small algorithmic tasks demonstrate that multi-token prediction is favorable for the development of induction heads and algorithmic reasoning capabilities. As an additional benefit, models trained with 4-token prediction are up to $3\times$ faster at inference, even with large batch sizes.
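The core recipe described in the abstract — $n$ independent output heads reading off a shared model trunk, trained with an auxiliary next-$n$-token cross-entropy — is compact enough to sketch. Below is a minimal, illustrative PyTorch sketch under simplifying assumptions: a tiny causal transformer stands in for the trunk, and plain linear layers stand in for the heads (the paper's heads are richer). The names MultiTokenPredictor and multi_token_loss are ours, not the authors' implementation.

```python
# Minimal, illustrative sketch of multi-token prediction: n independent output
# heads read off a shared trunk, and the loss averages the cross-entropies for
# predicting tokens 1..n steps ahead. Module/function names are our own.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, n_future: int = 4):
        super().__init__()
        self.n_future = n_future
        # Shared trunk: a tiny causal transformer standing in for the LLM body.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # One independent output head per future offset 1..n (plain linear
        # layers here; the paper uses richer heads on top of the trunk).
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer ids
        causal_mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.trunk(self.embed(tokens), mask=causal_mask)
        # Stacked logits: (n_future, batch, seq_len, vocab)
        return torch.stack([head(hidden) for head in self.heads])


def multi_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    # Head k (0-indexed) at position t is trained to predict token t + k + 1.
    losses = []
    for k in range(logits.size(0)):
        pred = logits[k][:, : -(k + 1), :]   # positions that have a target k+1 ahead
        target = tokens[:, k + 1 :]
        losses.append(
            F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))
        )
    return torch.stack(losses).mean()


if __name__ == "__main__":
    model = MultiTokenPredictor(vocab_size=1000, d_model=64, n_future=4)
    batch = torch.randint(0, 1000, (2, 32))
    loss = multi_token_loss(model(batch), batch)
    loss.backward()
    print(f"multi-token loss: {loss.item():.3f}")
```

In this reading, the extra heads act purely as an auxiliary training signal: at inference one can keep only the next-token head, or, as the abstract notes for its up to $3\times$ speedup, use the additional heads to draft several tokens per forward pass in a speculative-decoding style.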

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-gloeckle24a,
  title     = {Better & Faster Large Language Models via Multi-token Prediction},
  author    = {Gloeckle, Fabian and Youbi Idrissi, Badr and Roziere, Baptiste and Lopez-Paz, David and Synnaeve, Gabriel},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {15706--15734},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/gloeckle24a/gloeckle24a.pdf},
  url       = {https://proceedings.mlr.press/v235/gloeckle24a.html}
}
Endnote
%0 Conference Paper
%T Better & Faster Large Language Models via Multi-token Prediction
%A Fabian Gloeckle
%A Badr Youbi Idrissi
%A Baptiste Roziere
%A David Lopez-Paz
%A Gabriel Synnaeve
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-gloeckle24a
%I PMLR
%P 15706--15734
%U https://proceedings.mlr.press/v235/gloeckle24a.html
%V 235
APA
Gloeckle, F., Youbi Idrissi, B., Roziere, B., Lopez-Paz, D. & Synnaeve, G. (2024). Better & Faster Large Language Models via Multi-token Prediction. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:15706-15734. Available from https://proceedings.mlr.press/v235/gloeckle24a.html.