Scaling Laws for Fine-Grained Mixture of Experts

Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:33270-33288, 2024.

Abstract

Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, the modification of which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.
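
As a reading aid for the abstract, the following minimal Python sketch (our illustration, not the authors' released code; all variable names are hypothetical) shows one common way to interpret the granularity hyperparameter G: each expert is made G times narrower than a standard feed-forward block while G times more experts are routed to per token, so the active parameter count per token stays roughly constant (routing overhead is ignored here) and only the fineness of the expert partition changes.

# Illustrative sketch of "granularity" in a fine-grained MoE layer.
# Assumption: granularity G shrinks each expert's hidden size by a factor of G
# and multiplies the number of experts activated per token by G.

def moe_layer_shapes(d_model, d_ff, n_experts, top_k, granularity):
    """Return (expert_hidden, n_experts_total, n_active, active_params_per_token)."""
    expert_hidden = d_ff // granularity        # each expert is G times narrower
    n_experts_total = n_experts * granularity  # total expert count grows by G
    n_active = top_k * granularity             # route each token to G times more experts
    # One expert applies two projections: d_model -> expert_hidden -> d_model.
    active_params = n_active * 2 * d_model * expert_hidden
    return expert_hidden, n_experts_total, n_active, active_params

if __name__ == "__main__":
    # d_model=1024, d_ff=4096, 8 experts, top-1 routing; sweep granularity.
    for g in (1, 2, 4, 8):
        print(g, moe_layer_shapes(1024, 4096, 8, 1, g))
    # Active parameters per token come out the same for every G; granularity only
    # changes how finely expert capacity is partitioned, which is the third axis
    # (alongside model size and training tokens) in the paper's scaling laws.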

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-ludziejewski24a,
  title     = {Scaling Laws for Fine-Grained Mixture of Experts},
  author    = {Ludziejewski, Jan and Krajewski, Jakub and Adamczewski, Kamil and Pi\'{o}ro, Maciej and Krutul, Micha{\l} and Antoniak, Szymon and Ciebiera, Kamil and Kr\'{o}l, Krystian and Odrzyg\'{o}\'{z}d\'{z}, Tomasz and Sankowski, Piotr and Cygan, Marek and Jaszczur, Sebastian},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {33270--33288},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/ludziejewski24a/ludziejewski24a.pdf},
  url       = {https://proceedings.mlr.press/v235/ludziejewski24a.html},
  abstract  = {Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, the modification of which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.}
}
Endnote
%0 Conference Paper
%T Scaling Laws for Fine-Grained Mixture of Experts
%A Jan Ludziejewski
%A Jakub Krajewski
%A Kamil Adamczewski
%A Maciej Pióro
%A Michał Krutul
%A Szymon Antoniak
%A Kamil Ciebiera
%A Krystian Król
%A Tomasz Odrzygóźdź
%A Piotr Sankowski
%A Marek Cygan
%A Sebastian Jaszczur
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-ludziejewski24a
%I PMLR
%P 33270--33288
%U https://proceedings.mlr.press/v235/ludziejewski24a.html
%V 235
%X Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, the modification of which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.
APA
Ludziejewski, J., Krajewski, J., Adamczewski, K., Pióro, M., Krutul, M., Antoniak, S., Ciebiera, K., Król, K., Odrzygóźdź, T., Sankowski, P., Cygan, M. & Jaszczur, S. (2024). Scaling Laws for Fine-Grained Mixture of Experts. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:33270-33288. Available from https://proceedings.mlr.press/v235/ludziejewski24a.html.
