LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits

Zikai Zhou, Qizheng Zhang, Hermann Kumbong, Kunle Olukotun
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:79570-79594, 2025.

Abstract

Fine-tuning large language models (LLMs) is increasingly costly as models scale to hundreds of billions of parameters, and even parameter-efficient fine-tuning (PEFT) methods like LoRA remain resource-intensive. We introduce LowRA, the first framework to enable LoRA fine-tuning below 2 bits per parameter with minimal performance loss. LowRA optimizes fine-grained quantization—mapping, threshold selection, and precision assignment—while leveraging efficient CUDA kernels for scalable deployment. Extensive evaluations across 4 LLMs and 4 datasets show that LowRA achieves a superior performance–precision trade-off above 2 bits and remains accurate down to 1.15 bits, reducing memory usage by up to 50%. Our results highlight the potential of ultra-low-bit LoRA fine-tuning for resource-constrained environments.
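
As a rough illustration of the setting only (assumed PyTorch API, not LowRA's released code), the sketch below pairs a frozen, codebook-quantized base weight with trainable low-rank adapters. The uniform per-tensor codebook and its implicit nearest-neighbor thresholds are simplified stand-ins for the mapping, threshold selection, and mixed precision assignment that LowRA actually optimizes.

# Illustrative sketch: quantized frozen base weight + trainable LoRA adapters.
# Simplified stand-in, not LowRA's implementation or kernels.
import torch
import torch.nn as nn


def quantize_to_codebook(w: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each weight to its nearest codebook value (the midpoints between
    adjacent codebook entries act as the quantization thresholds)."""
    idx = (w.reshape(-1, 1) - codebook.reshape(1, -1)).abs().argmin(dim=1)
    return codebook[idx].reshape(w.shape)


class QuantizedLoRALinear(nn.Module):
    def __init__(self, weight: torch.Tensor, rank: int = 8, bits: int = 2):
        super().__init__()
        # Per-tensor scale and uniform codebook; real systems use per-group
        # scales, learned codebooks, and per-group precision assignment.
        scale = weight.abs().mean()
        codebook = scale * torch.linspace(-1.0, 1.0, 2 ** bits)
        # Frozen, dequantized base weight (kept in float here for simplicity;
        # a real kernel would store packed low-bit codes plus the codebook).
        self.register_buffer("w_q", quantize_to_codebook(weight, codebook))
        out_f, in_f = weight.shape
        # Trainable low-rank adapters: delta W = B @ A.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.w_q + self.B @ self.A).T


# Usage: wrap a pretrained weight matrix and train only A and B.
layer = QuantizedLoRALinear(torch.randn(64, 128), rank=4, bits=2)
y = layer(torch.randn(2, 128))
print(y.shape)  # torch.Size([2, 64])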

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zhou25ak,
  title     = {{L}ow{RA}: Accurate and Efficient {L}o{RA} Fine-Tuning of {LLM}s under 2 Bits},
  author    = {Zhou, Zikai and Zhang, Qizheng and Kumbong, Hermann and Olukotun, Kunle},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {79570--79594},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhou25ak/zhou25ak.pdf},
  url       = {https://proceedings.mlr.press/v267/zhou25ak.html},
  abstract  = {Fine-tuning large language models (LLMs) is increasingly costly as models scale to hundreds of billions of parameters, and even parameter-efficient fine-tuning (PEFT) methods like LoRA remain resource-intensive. We introduce LowRA, the first framework to enable LoRA fine-tuning below 2 bits per parameter with minimal performance loss. LowRA optimizes fine-grained quantization—mapping, threshold selection, and precision assignment—while leveraging efficient CUDA kernels for scalable deployment. Extensive evaluations across 4 LLMs and 4 datasets show that LowRA achieves a superior performance–precision trade-off above 2 bits and remains accurate down to 1.15 bits, reducing memory usage by up to 50%. Our results highlight the potential of ultra-low-bit LoRA fine-tuning for resource-constrained environments.}
}
Endnote
%0 Conference Paper
%T LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits
%A Zikai Zhou
%A Qizheng Zhang
%A Hermann Kumbong
%A Kunle Olukotun
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhou25ak
%I PMLR
%P 79570--79594
%U https://proceedings.mlr.press/v267/zhou25ak.html
%V 267
%X Fine-tuning large language models (LLMs) is increasingly costly as models scale to hundreds of billions of parameters, and even parameter-efficient fine-tuning (PEFT) methods like LoRA remain resource-intensive. We introduce LowRA, the first framework to enable LoRA fine-tuning below 2 bits per parameter with minimal performance loss. LowRA optimizes fine-grained quantization—mapping, threshold selection, and precision assignment—while leveraging efficient CUDA kernels for scalable deployment. Extensive evaluations across 4 LLMs and 4 datasets show that LowRA achieves a superior performance–precision trade-off above 2 bits and remains accurate down to 1.15 bits, reducing memory usage by up to 50%. Our results highlight the potential of ultra-low-bit LoRA fine-tuning for resource-constrained environments.
APA
Zhou, Z., Zhang, Q., Kumbong, H., & Olukotun, K. (2025). LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:79570-79594. Available from https://proceedings.mlr.press/v267/zhou25ak.html.
