Is the Number of Trainable Parameters All That Actually Matters?

Amélie Chatelain, Amine Djeghri, Daniel Hesslow, Julien Launay
Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops, PMLR 163:27-32, 2022.

Abstract

Recent work has identified simple empirical scaling laws for language models, linking compute budget, dataset size, model size, and autoregressive modeling loss. The validity of these simple power laws across orders of magnitude in model scale provides compelling evidence that larger models are also more capable models. However, scaling up models under the constraints of hardware and infrastructure is no easy feat, and rapidly becomes a hard and expensive engineering problem. We investigate ways to tentatively cheat scaling laws and train larger models more cheaply. We emulate an increase in effective parameters, using efficient approximations: either by doping the models with frozen random parameters, or by using fast structured transforms in place of dense linear layers. We find that the scaling relationship between test loss and compute depends only on the actual number of trainable parameters; scaling laws cannot be deceived by spurious parameters.
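
To make the "frozen random parameter" doping idea concrete, here is a minimal, hypothetical PyTorch sketch (names such as PartiallyFrozenLinear and frozen_fraction are illustrative assumptions, not taken from the paper or its code): a linear layer whose weight matrix concatenates trainable rows with rows frozen at their random initialisation, so its total parameter count exceeds its trainable one.

# Minimal sketch (assumed PyTorch implementation, not the authors' code) of doping a
# linear layer with frozen random parameters: part of the weight matrix is trained,
# part stays fixed at its random initialisation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartiallyFrozenLinear(nn.Module):
    """Linear layer whose weight concatenates trainable rows with frozen random rows,
    so total parameters > trainable parameters."""

    def __init__(self, in_features: int, out_features: int, frozen_fraction: float = 0.5):
        super().__init__()
        n_frozen = int(out_features * frozen_fraction)
        n_trainable = out_features - n_frozen
        scale = in_features ** -0.5
        # Trainable rows: updated by the optimiser as usual.
        self.weight_trainable = nn.Parameter(torch.randn(n_trainable, in_features) * scale)
        # Frozen rows: registered as a buffer so they never receive gradient updates.
        self.register_buffer("weight_frozen", torch.randn(n_frozen, in_features) * scale)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = torch.cat([self.weight_trainable, self.weight_frozen], dim=0)
        return F.linear(x, weight, self.bias)


if __name__ == "__main__":
    layer = PartiallyFrozenLinear(in_features=512, out_features=2048, frozen_fraction=0.75)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = trainable + layer.weight_frozen.numel()
    print(f"total: {total}, trainable: {trainable}")
    print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 2048])

Under the paper's finding, widening a model with such frozen (or otherwise spurious) parameters would not shift the loss-versus-compute scaling curve: only the trainable rows count.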

Cite this Paper


BibTeX
@InProceedings{pmlr-v163-chatelain22a,
  title     = {Is the Number of Trainable Parameters All That Actually Matters?},
  author    = {Chatelain, Am\'{e}lie and Djeghri, Amine and Hesslow, Daniel and Launay, Julien},
  booktitle = {Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops},
  pages     = {27--32},
  year      = {2022},
  editor    = {Pradier, Melanie F. and Schein, Aaron and Hyland, Stephanie and Ruiz, Francisco J. R. and Forde, Jessica Z.},
  volume    = {163},
  series    = {Proceedings of Machine Learning Research},
  month     = {13 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v163/chatelain22a/chatelain22a.pdf},
  url       = {https://proceedings.mlr.press/v163/chatelain22a.html}
}
APA
Chatelain, A., Djeghri, A., Hesslow, D. & Launay, J. (2022). Is the Number of Trainable Parameters All That Actually Matters? Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops, in Proceedings of Machine Learning Research 163:27-32. Available from https://proceedings.mlr.press/v163/chatelain22a.html.
