RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts

Tuan Truong, Chau Nguyen, Huy Nguyen, Minh Le, Trung Le, Nhat Ho
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:60183-60217, 2025.

Abstract

Low-rank Adaptation (LoRA) has emerged as a powerful and efficient method for fine-tuning large-scale foundation models. Despite its popularity, the theoretical understanding of LoRA has remained underexplored. In this paper, we present a theoretical analysis of LoRA by examining its connection to the Mixture of Experts models. Under this framework, we show that a simple technique, reparameterizing LoRA matrices, can notably accelerate the low-rank matrix estimation process. In particular, we prove that reparameterization can reduce the data needed to achieve a desired estimation error from an exponential to a polynomial scale. Motivated by this insight, we propose Reparameterized Low-Rank Adaptation (RepLoRA), incorporating a lightweight MLP to reparameterize the LoRA matrices. Extensive experiments across multiple domains demonstrate that RepLoRA consistently outperforms vanilla LoRA. With limited data, RepLoRA surpasses LoRA by a substantial margin of up to 40.0% and achieves LoRA’s performance using only 30.0% of the training data, highlighting the theoretical and empirical robustness of our PEFT method.
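The page itself contains no code; as a rough illustration of the idea described in the abstract, the following PyTorch-style sketch shows one plausible way a lightweight MLP could reparameterize the LoRA matrices. The module name RepLoRALinear, the seed matrices, the MLP shapes, and all hyperparameters are assumptions for illustration, not the authors' implementation.

# Minimal sketch (assumption, not the authors' code): a LoRA-style adapter whose
# low-rank factors A and B are produced by small MLPs applied to learnable
# "seed" matrices, rather than being trained directly as in vanilla LoRA.
import torch
import torch.nn as nn


class RepLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, hidden: int = 32, alpha: float = 16.0):
        super().__init__()
        self.base = base                          # frozen pretrained layer
        for p in self.base.parameters():
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        self.scale = alpha / rank
        # Learnable seeds, analogous to LoRA's A and B factors.
        self.seed_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.seed_B = nn.Parameter(torch.zeros(d_out, rank))
        # Lightweight MLPs that reparameterize the seeds row-wise.
        # (The exact architecture is a guess; the abstract only states "a lightweight MLP".)
        self.rep_A = nn.Sequential(nn.Linear(d_in, hidden), nn.GELU(), nn.Linear(hidden, d_in))
        self.rep_B = nn.Sequential(nn.Linear(rank, hidden), nn.GELU(), nn.Linear(hidden, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        A = self.rep_A(self.seed_A)               # (rank, d_in)
        B = self.rep_B(self.seed_B)               # (d_out, rank)
        delta = x @ A.T @ B.T                     # low-rank update, as in LoRA
        return self.base(x) + self.scale * delta

Under this sketch, only the seed matrices and the two reparameterization MLPs would be trained while the pretrained weight stays frozen; after fine-tuning, the resulting A and B could be computed once and merged into the base weight as with standard LoRA, so the MLPs add no inference cost.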

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-truong25a,
  title     = {{R}ep{L}o{RA}: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts},
  author    = {Truong, Tuan and Nguyen, Chau and Nguyen, Huy and Le, Minh and Le, Trung and Ho, Nhat},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {60183--60217},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/truong25a/truong25a.pdf},
  url       = {https://proceedings.mlr.press/v267/truong25a.html},
  abstract  = {Low-rank Adaptation (LoRA) has emerged as a powerful and efficient method for fine-tuning large-scale foundation models. Despite its popularity, the theoretical understanding of LoRA has remained underexplored. In this paper, we present a theoretical analysis of LoRA by examining its connection to the Mixture of Experts models. Under this framework, we show that a simple technique, reparameterizing LoRA matrices, can notably accelerate the low-rank matrix estimation process. In particular, we prove that reparameterization can reduce the data needed to achieve a desired estimation error from an exponential to a polynomial scale. Motivated by this insight, we propose Reparameterized Low-Rank Adaptation (RepLoRA), incorporating a lightweight MLP to reparameterize the LoRA matrices. Extensive experiments across multiple domains demonstrate that RepLoRA consistently outperforms vanilla LoRA. With limited data, RepLoRA surpasses LoRA by a substantial margin of up to 40.0% and achieves LoRA’s performance using only 30.0% of the training data, highlighting the theoretical and empirical robustness of our PEFT method.}
}
Endnote
%0 Conference Paper
%T RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts
%A Tuan Truong
%A Chau Nguyen
%A Huy Nguyen
%A Minh Le
%A Trung Le
%A Nhat Ho
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-truong25a
%I PMLR
%P 60183--60217
%U https://proceedings.mlr.press/v267/truong25a.html
%V 267
%X Low-rank Adaptation (LoRA) has emerged as a powerful and efficient method for fine-tuning large-scale foundation models. Despite its popularity, the theoretical understanding of LoRA has remained underexplored. In this paper, we present a theoretical analysis of LoRA by examining its connection to the Mixture of Experts models. Under this framework, we show that a simple technique, reparameterizing LoRA matrices, can notably accelerate the low-rank matrix estimation process. In particular, we prove that reparameterization can reduce the data needed to achieve a desired estimation error from an exponential to a polynomial scale. Motivated by this insight, we propose Reparameterized Low-Rank Adaptation (RepLoRA), incorporating a lightweight MLP to reparameterize the LoRA matrices. Extensive experiments across multiple domains demonstrate that RepLoRA consistently outperforms vanilla LoRA. With limited data, RepLoRA surpasses LoRA by a substantial margin of up to 40.0% and achieves LoRA’s performance using only 30.0% of the training data, highlighting the theoretical and empirical robustness of our PEFT method.
APA
Truong, T., Nguyen, C., Nguyen, H., Le, M., Le, T., & Ho, N. (2025). RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:60183-60217. Available from https://proceedings.mlr.press/v267/truong25a.html.