Scaling Smart: Accelerating Large Language Model Pre-Training with Small Model Initialization

Mohammad Samragh, Seyed Iman Mirzadeh, Keivan Alizadeh-Vahid, Fartash Faghri, Minsik Cho, Moin Nabi, Devang Naik, Mehrdad Farajtabar
Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, PMLR 262:1-13, 2024.

Abstract

The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot achieve the accuracy of large models. In this paper, we explore an intriguing idea to connect these two different regimes: Can we develop a method to initialize large language models using smaller pre-trained models? Will such initialization bring any benefits in terms of training time and final accuracy? We introduce HyperCloning, a method that can expand the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions. Our method ensures that the larger model retains the functionality of the smaller model. As a result, the larger model already inherits the predictive power and accuracy of the smaller model before training starts. We demonstrate that training such an initialized model results in significant savings in GPU hours required for pre-training large language models. Implementation of HyperCloning is available at https://github.com/apple/ml-hypercloning/tree/main.
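
To make the core idea concrete, the short PyTorch sketch below illustrates one way a function-preserving width expansion of a single linear layer can work: the small layer's weight matrix is tiled into the larger layer and rescaled so that, when the input features are duplicated, each half of the expanded output reproduces the small layer's output. This is an illustrative sketch only; the function name `expand_linear`, the 2x expansion factor, and the block-tiling-with-rescaling scheme are assumptions of this example, not a transcription of the released HyperCloning code (see the GitHub repository linked above for the official implementation).

```python
# Minimal sketch of function-preserving width expansion for one linear layer.
# Assumptions: 2x expansion, block-tiled weights scaled by 1/factor so that
# duplicated input features are averaged rather than summed.
import torch
import torch.nn as nn


def expand_linear(small: nn.Linear, factor: int = 2) -> nn.Linear:
    """Build a linear layer whose input/output widths are `factor`x larger,
    initialized so each output block reproduces the small layer's output."""
    d_in, d_out = small.in_features, small.out_features
    big = nn.Linear(d_in * factor, d_out * factor, bias=small.bias is not None)
    with torch.no_grad():
        # Tile the weight in a (factor x factor) block pattern and divide by
        # `factor`: with the input duplicated `factor` times, each output
        # block equals the original W @ x.
        big.weight.copy_(small.weight.repeat(factor, factor) / factor)
        if small.bias is not None:
            big.bias.copy_(small.bias.repeat(factor))
    return big


if __name__ == "__main__":
    torch.manual_seed(0)
    small = nn.Linear(4, 3)
    big = expand_linear(small, factor=2)

    x = torch.randn(1, 4)
    x_big = torch.cat([x, x], dim=-1)  # duplicate input features
    y_small = small(x)
    y_big = big(x_big)

    # Each half of the expanded output matches the small layer's output.
    assert torch.allclose(y_big[:, :3], y_small, atol=1e-6)
    assert torch.allclose(y_big[:, 3:], y_small, atol=1e-6)
    print("Expanded layer preserves the small layer's function.")
```

Because the expanded layer computes the same function as the small one on duplicated inputs, applying such an expansion throughout the network lets the larger model start pre-training from the small model's accuracy rather than from random initialization.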

Cite this Paper


BibTeX
@InProceedings{pmlr-v262-samragh24a,
  title     = {Scaling Smart: Accelerating Large Language Model Pre-Training with Small Model Initialization},
  author    = {Samragh, Mohammad and Mirzadeh, Seyed Iman and Alizadeh-Vahid, Keivan and Faghri, Fartash and Cho, Minsik and Nabi, Moin and Naik, Devang and Farajtabar, Mehrdad},
  booktitle = {Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop},
  pages     = {1--13},
  year      = {2024},
  editor    = {Rezagholizadeh, Mehdi and Passban, Peyman and Samiee, Soheila and Partovi Nia, Vahid and Cheng, Yu and Deng, Yue and Liu, Qun and Chen, Boxing},
  volume    = {262},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v262/main/assets/samragh24a/samragh24a.pdf},
  url       = {https://proceedings.mlr.press/v262/samragh24a.html},
  abstract  = {The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot achieve the accuracy of large models. In this paper, we explore an intriguing idea to connect these two different regimes: Can we develop a method to initialize large language models using smaller pre-trained models? Will such initialization bring any benefits in terms of training time and final accuracy? In this paper, we introduce HyperCloning, a method that can expand the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions. Our method ensures that the larger model retains the functionality of the smaller model. As a result, the larger model already inherits the predictive power and accuracy of the smaller model before the training starts. We demonstrate that training such an initialized model results in significant savings in terms of GPU hours required for pre-training large language models. Implementation of HyperCloning is available at https://github.com/apple/ml-hypercloning/tree/main.}
}
Endnote
%0 Conference Paper
%T Scaling Smart: Accelerating Large Language Model Pre-Training with Small Model Initialization
%A Mohammad Samragh
%A Seyed Iman Mirzadeh
%A Keivan Alizadeh-Vahid
%A Fartash Faghri
%A Minsik Cho
%A Moin Nabi
%A Devang Naik
%A Mehrdad Farajtabar
%B Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop
%C Proceedings of Machine Learning Research
%D 2024
%E Mehdi Rezagholizadeh
%E Peyman Passban
%E Soheila Samiee
%E Vahid Partovi Nia
%E Yu Cheng
%E Yue Deng
%E Qun Liu
%E Boxing Chen
%F pmlr-v262-samragh24a
%I PMLR
%P 1--13
%U https://proceedings.mlr.press/v262/samragh24a.html
%V 262
%X The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot achieve the accuracy of large models. In this paper, we explore an intriguing idea to connect these two different regimes: Can we develop a method to initialize large language models using smaller pre-trained models? Will such initialization bring any benefits in terms of training time and final accuracy? In this paper, we introduce HyperCloning, a method that can expand the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions. Our method ensures that the larger model retains the functionality of the smaller model. As a result, the larger model already inherits the predictive power and accuracy of the smaller model before the training starts. We demonstrate that training such an initialized model results in significant savings in terms of GPU hours required for pre-training large language models. Implementation of HyperCloning is available at https://github.com/apple/ml-hypercloning/tree/main.
APA
Samragh, M., Mirzadeh, S.I., Alizadeh-Vahid, K., Faghri, F., Cho, M., Nabi, M., Naik, D. & Farajtabar, M. (2024). Scaling Smart: Accelerating Large Language Model Pre-Training with Small Model Initialization. Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, in Proceedings of Machine Learning Research 262:1-13. Available from https://proceedings.mlr.press/v262/samragh24a.html.