Navigating Scaling Laws: Compute Optimality in Adaptive Model Training

Sotiris Anagnostidis, Gregor Bachmann, Imanol Schlag, Thomas Hofmann
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:1511-1530, 2024.

Abstract

In recent years, the state-of-the-art in deep learning has been dominated by very large models that have been pre-trained on vast amounts of data. The paradigm is very simple: investing more computational resources (optimally) leads to better performance, and even predictably so; neural scaling laws have been derived that accurately forecast the performance of a network for a desired level of compute. This leads to the notion of a ‘compute-optimal’ model, i.e. a model that allocates a given level of compute during training optimally to maximize performance. In this work, we extend the concept of optimality by allowing for an ‘adaptive’ model, i.e. a model that can change its shape during training. By doing so, we can design adaptive models that optimally traverse between the underlying scaling laws and outpace their ‘static’ counterparts, leading to a significant reduction in the required compute to reach a given target performance. We show that our approach generalizes across modalities and different shape parameters.
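
For context, the notion of compute optimality referenced in the abstract is commonly formalized (e.g., in Chinchilla-style analyses) with a parametric law L(N, D) = E + A·N^(-α) + B·D^(-β), where N is the number of parameters, D the number of training tokens, and C ≈ 6ND the training FLOPs; a compute-optimal static model minimizes L subject to that budget. The sketch below illustrates this idea only. The coefficients and the grid-search helper compute_optimal are illustrative assumptions, not the fits or the adaptive-training method proposed in the paper.

# Illustrative only: a Chinchilla-style scaling law L(N, D) = E + A*N**-alpha + B*D**-beta
# with the common approximation C ~ 6*N*D FLOPs. The constants below are made up for
# demonstration and are NOT the coefficients fitted in this paper.
import numpy as np

E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28  # hypothetical coefficients

def loss(N, D):
    """Predicted pre-training loss for N parameters and D training tokens."""
    return E + A * N**-alpha + B * D**-beta

def compute_optimal(C, num_points=2000):
    """Grid-search the model size N that minimizes loss under a fixed FLOP budget C (C ~ 6*N*D)."""
    N = np.logspace(6, 12, num_points)   # candidate model sizes
    D = C / (6.0 * N)                    # tokens implied by the budget
    L = loss(N, D)
    i = np.argmin(L)
    return N[i], D[i], L[i]

if __name__ == "__main__":
    for C in [1e20, 1e21, 1e22]:
        N_opt, D_opt, L_opt = compute_optimal(C)
        print(f"C={C:.0e}: N*={N_opt:.2e} params, D*={D_opt:.2e} tokens, predicted loss={L_opt:.3f}")

Under such a law, each fixed model shape traces its own loss-versus-compute curve; the paper's contribution is to let the model change shape during training so that it moves between these curves rather than being pinned to one of them.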

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-anagnostidis24a,
  title     = {Navigating Scaling Laws: Compute Optimality in Adaptive Model Training},
  author    = {Anagnostidis, Sotiris and Bachmann, Gregor and Schlag, Imanol and Hofmann, Thomas},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {1511--1530},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/anagnostidis24a/anagnostidis24a.pdf},
  url       = {https://proceedings.mlr.press/v235/anagnostidis24a.html}
}
Endnote
%0 Conference Paper
%T Navigating Scaling Laws: Compute Optimality in Adaptive Model Training
%A Sotiris Anagnostidis
%A Gregor Bachmann
%A Imanol Schlag
%A Thomas Hofmann
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-anagnostidis24a
%I PMLR
%P 1511--1530
%U https://proceedings.mlr.press/v235/anagnostidis24a.html
%V 235
APA
Anagnostidis, S., Bachmann, G., Schlag, I. & Hofmann, T. (2024). Navigating Scaling Laws: Compute Optimality in Adaptive Model Training. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:1511-1530. Available from https://proceedings.mlr.press/v235/anagnostidis24a.html.