LoRA-Gen: Specializing Large Language Model via Online LoRA Generation

Yicheng Xiao, Lin Song, Rui Yang, Cheng Cheng, Yixiao Ge, Xiu Li, Ying Shan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:68459-68471, 2025.

Abstract

Recent advances have highlighted the benefits of scaling language models to enhance performance across a wide range of NLP tasks. However, these approaches still face limitations in effectiveness and efficiency when applied to domain-specific tasks, particularly for small edge-side models. We propose the LoRA-Gen framework, which utilizes a large cloud-side model to generate LoRA parameters for edge-side models based on task descriptions. By employing the reparameterization technique, we merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model by reducing the input context length. Without task-specific training, LoRA-Gen outperforms conventional LoRA fine-tuning, achieving competitive accuracy and a 2.1x speedup with TinyLLaMA-1.1B on reasoning tasks. In addition, our method delivers a compression ratio of 10.1x with Gemma-2B on intelligent agent tasks.
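
A minimal sketch of the reparameterization step described in the abstract, assuming a PyTorch edge-side model; the function name merge_lora, the rank, and the scaling factor alpha are illustrative assumptions, not the authors' implementation:

# Sketch: fold a generated low-rank update into a frozen edge-side linear layer,
# i.e. W' = W + (alpha / r) * B @ A, so inference needs no extra adapter modules.
import torch
import torch.nn as nn

def merge_lora(linear: nn.Linear, lora_A: torch.Tensor, lora_B: torch.Tensor,
               alpha: float = 16.0) -> nn.Linear:
    # lora_A: (r, in_features), lora_B: (out_features, r) -- e.g. produced by a
    # cloud-side generator conditioned on a task description (hypothetical shapes).
    r = lora_A.shape[0]
    with torch.no_grad():
        linear.weight += (alpha / r) * (lora_B @ lora_A)
    return linear

# Usage: specialize one layer of a small edge-side model.
layer = nn.Linear(2048, 2048)
A = torch.randn(8, 2048) * 0.01   # rank r = 8
B = torch.zeros(2048, 8)          # conventional LoRA init: B starts at zero
merge_lora(layer, A, B)

Because the update is folded into the existing weight matrix, the specialized model keeps the original architecture and per-token cost; the task description is processed once by the cloud-side generator instead of being prepended to every edge-side prompt, which is the source of the reduced input context length.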

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-xiao25e,
  title     = {{L}o{RA}-Gen: Specializing Large Language Model via Online {L}o{RA} Generation},
  author    = {Xiao, Yicheng and Song, Lin and Yang, Rui and Cheng, Cheng and Ge, Yixiao and Li, Xiu and Shan, Ying},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {68459--68471},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/xiao25e/xiao25e.pdf},
  url       = {https://proceedings.mlr.press/v267/xiao25e.html},
  abstract  = {Recent advances have highlighted the benefits of scaling language models to enhance performance across a wide range of NLP tasks. However, these approaches still face limitations in effectiveness and efficiency when applied to domain-specific tasks, particularly for small edge-side models. We propose the LoRA-Gen framework, which utilizes a large cloud-side model to generate LoRA parameters for edge-side models based on task descriptions. By employing the reparameterization technique, we merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model by reducing the input context length. Without specialized training, LoRA-Gen outperforms conventional LoRA fine-tuning, which achieves competitive accuracy and a 2.1x speedup with TinyLLaMA-1.1B in reasoning tasks. Besides, our method delivers a compress ratio of 10.1x with Gemma-2B on intelligent agent tasks.}
}
Endnote
%0 Conference Paper
%T LoRA-Gen: Specializing Large Language Model via Online LoRA Generation
%A Yicheng Xiao
%A Lin Song
%A Rui Yang
%A Cheng Cheng
%A Yixiao Ge
%A Xiu Li
%A Ying Shan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-xiao25e
%I PMLR
%P 68459--68471
%U https://proceedings.mlr.press/v267/xiao25e.html
%V 267
%X Recent advances have highlighted the benefits of scaling language models to enhance performance across a wide range of NLP tasks. However, these approaches still face limitations in effectiveness and efficiency when applied to domain-specific tasks, particularly for small edge-side models. We propose the LoRA-Gen framework, which utilizes a large cloud-side model to generate LoRA parameters for edge-side models based on task descriptions. By employing the reparameterization technique, we merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model by reducing the input context length. Without specialized training, LoRA-Gen outperforms conventional LoRA fine-tuning, which achieves competitive accuracy and a 2.1x speedup with TinyLLaMA-1.1B in reasoning tasks. Besides, our method delivers a compress ratio of 10.1x with Gemma-2B on intelligent agent tasks.
APA
Xiao, Y., Song, L., Yang, R., Cheng, C., Ge, Y., Li, X., & Shan, Y. (2025). LoRA-Gen: Specializing Large Language Model via Online LoRA Generation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:68459-68471. Available from https://proceedings.mlr.press/v267/xiao25e.html.
