LoRA+: Efficient Low Rank Adaptation of Large Models

Soufiane Hayou, Nikhil Ghosh, Bin Yu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:17783-17806, 2024.

Abstract

In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in (Hu et al., 2021) leads to suboptimal finetuning of models with large width. This is because the adapter matrices A and B in LoRA are updated with the same learning rate in Adam. Using scaling arguments for large-width networks, we demonstrate that using the same learning rate for A and B does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen fixed ratio. We call this proposed algorithm LoRA+. In our extensive experiments, LoRA+ improves finetuning speed (up to ∼2× speedup) and performance (1–2% improvements) at the same computational cost as LoRA. The code is available at https://github.com/nikhil-ghosh-berkeley/loraplus
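The proposed change requires no modification to the LoRA architecture itself: only the optimizer learning rates for A and B differ, by a fixed ratio. Below is a minimal PyTorch-style sketch of this idea; the parameter naming convention ("lora_A"/"lora_B"), the helper name, and the ratio value are illustrative assumptions rather than the official implementation, which is available in the repository linked above.

import torch

def build_loraplus_optimizer(model, base_lr=2e-4, lr_ratio=16.0, weight_decay=0.0):
    # Collect trainable LoRA adapter parameters, assuming they are named with
    # "lora_A" / "lora_B" substrings (an assumption, not the official API).
    a_params, b_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if "lora_A" in name:
            a_params.append(param)
        elif "lora_B" in name:
            b_params.append(param)
    # LoRA+: the B matrices get a learning rate larger than A's by a fixed ratio.
    param_groups = [
        {"params": a_params, "lr": base_lr},
        {"params": b_params, "lr": base_lr * lr_ratio},
    ]
    return torch.optim.AdamW(param_groups, weight_decay=weight_decay)

The rest of the training loop is unchanged, so the computational cost is identical to standard LoRA; only the per-group learning rates differ.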

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-hayou24a,
  title     = {{L}o{RA}+: Efficient Low Rank Adaptation of Large Models},
  author    = {Hayou, Soufiane and Ghosh, Nikhil and Yu, Bin},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {17783--17806},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/hayou24a/hayou24a.pdf},
  url       = {https://proceedings.mlr.press/v235/hayou24a.html},
  abstract  = {In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in (Hu et al., 2021) leads to suboptimal finetuning of models with large width. This is due to the fact that adapter matrices A and B in LoRA are updated with the same learning rate in ADAM. Using scaling arguments for large width networks, we demonstrate that the same learning rate does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen fixed ratio. We call this proposed algorithm LoRA+. In our extensive experiments, LoRA+ improves finetuning speed (up to ∼ 2X SpeedUp) and performance (1% − 2% improvements), at the same computational cost as LoRA. The code is available at https://github.com/nikhil-ghosh-berkeley/loraplus}
}
Endnote
%0 Conference Paper
%T LoRA+: Efficient Low Rank Adaptation of Large Models
%A Soufiane Hayou
%A Nikhil Ghosh
%A Bin Yu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-hayou24a
%I PMLR
%P 17783--17806
%U https://proceedings.mlr.press/v235/hayou24a.html
%V 235
%X In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in (Hu et al., 2021) leads to suboptimal finetuning of models with large width. This is due to the fact that adapter matrices A and B in LoRA are updated with the same learning rate in ADAM. Using scaling arguments for large width networks, we demonstrate that the same learning rate does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen fixed ratio. We call this proposed algorithm LoRA+. In our extensive experiments, LoRA+ improves finetuning speed (up to ∼ 2X SpeedUp) and performance (1% − 2% improvements), at the same computational cost as LoRA. The code is available at https://github.com/nikhil-ghosh-berkeley/loraplus
APA
Hayou, S., Ghosh, N. & Yu, B. (2024). LoRA+: Efficient Low Rank Adaptation of Large Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:17783-17806. Available from https://proceedings.mlr.press/v235/hayou24a.html.
