RGP: Achieving Memory-Efficient Model Fine-tuning Via Randomized Gradient Projection

Ali Saheb Pasand, Pouya Bashivan
Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, PMLR 262:47-54, 2024.

Abstract

Training and fine-tuning Large Language Models (LLMs) require significant memory due to the substantial growth in the size of weight parameters and optimizer states. While methods like low-rank adaptation (LoRA), which introduce low-rank trainable modules in parallel to frozen pre-trained weights, effectively reduce memory usage, they often fail to preserve the optimization trajectory and are generally less effective for pre-training models. Approaches such as GaLore, which instead project gradients onto lower-dimensional spaces, maintain the training trajectory and perform well in pre-training, but suffer from high computational cost because they require repeated singular value decompositions of large matrices. In this work, we propose Randomized Gradient Projection (RGP), which outperforms GaLore, the current state-of-the-art in efficient fine-tuning, on the GLUE task suite, while being 74% faster on average and requiring similar memory.
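
The abstract contrasts SVD-based gradient projection (as in GaLore) with randomized projection (the idea behind RGP). The NumPy sketch below is only an illustrative rendering of that contrast under stated assumptions, not the authors' released implementation; the function names, the rank r, and the Gaussian scaling are choices made for this example.

```python
# Illustrative sketch only -- not the authors' code. It contrasts two ways of
# building a rank-r projector for a gradient matrix G (m x n): an SVD-based
# projector in the spirit of GaLore, and a random Gaussian projector in the
# spirit of RGP. Optimizer state lives in the small r x n space either way.
import numpy as np

def project_svd(G: np.ndarray, r: int):
    """Project G onto its top-r left singular vectors (costly: needs an SVD)."""
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    P = U[:, :r]                      # m x r orthonormal basis
    return P, P.T @ G                 # projector, low-rank gradient (r x n)

def project_random(G: np.ndarray, r: int, seed: int = 0):
    """Project G with a random Gaussian matrix (cheap: no SVD required)."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((G.shape[0], r)) / np.sqrt(r)  # m x r
    return P, P.T @ G                                       # r x n

# Usage sketch: keep optimizer moments in the r x n space, then map the
# low-rank update back to the full weight shape with P before applying it.
G = np.random.default_rng(1).standard_normal((1024, 1024))
P, G_low = project_random(G, r=128)
update_low = 0.001 * G_low            # stand-in for an optimizer step
full_update = P @ update_low          # back-projected update, 1024 x 1024
```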

Cite this Paper


BibTeX
@InProceedings{pmlr-v262-saheb-pasand24a,
  title     = {{RGP}: Achieving Memory-Efficient Model Fine-tuning Via Randomized Gradient Projection},
  author    = {Saheb Pasand, Ali and Bashivan, Pouya},
  booktitle = {Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop},
  pages     = {47--54},
  year      = {2024},
  editor    = {Rezagholizadeh, Mehdi and Passban, Peyman and Samiee, Soheila and Partovi Nia, Vahid and Cheng, Yu and Deng, Yue and Liu, Qun and Chen, Boxing},
  volume    = {262},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v262/main/assets/saheb-pasand24a/saheb-pasand24a.pdf},
  url       = {https://proceedings.mlr.press/v262/saheb-pasand24a.html},
  abstract  = {Training and fine-tuning Large Language Models (LLMs) require significant memory due to the substantial growth in the size of weight parameters and optimizer states. While methods like low-rank adaptation (LoRA), which introduce low-rank trainable modules in parallel to frozen pre-trained weights, effectively reduce memory usage, they often fail to preserve the optimization trajectory and are generally less effective for pre-training models. On the other hand, approaches, such as GaLore, that project gradients onto lower-dimensional spaces maintain the training trajectory and perform well in pre-training but suffer from high computational complexity, as they require repeated singular value decomposition on large matrices. In this work, we propose Randomized Gradient Projection (RGP), which outperforms GaLore, the current state-of-the-art in efficient fine-tuning, on the GLUE task suite, while being 74% faster on average and requiring similar memory.}
}
APA
Saheb Pasand, A. & Bashivan, P. (2024). RGP: Achieving Memory-Efficient Model Fine-tuning Via Randomized Gradient Projection. Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, in Proceedings of Machine Learning Research 262:47-54. Available from https://proceedings.mlr.press/v262/saheb-pasand24a.html.
