Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead

Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18062-18095, 2025.

Abstract

Fine-tuning large language models (LLMs) with low-rank adaptations (LoRAs) has become common practice, often yielding numerous copies of the same LLM differing only in their LoRA updates. This paradigm presents challenges for systems that serve real-time responses to queries that each involve a different LoRA. Prior works optimize the design of such systems but still require continuous loading and offloading of LoRAs, as it is infeasible to store thousands of LoRAs in GPU memory. To mitigate this issue, we investigate the efficacy of compression when serving LoRAs. We propose a method for the joint compression of LoRAs into a shared basis paired with LoRA-specific scaling matrices. We extend our algorithm to learn clusters of LoRAs that are amenable to joint compression, allowing it to scale gracefully to large LoRA collections. Our experiments with up to 1000 LoRAs demonstrate that compressed LoRAs preserve performance while offering major throughput gains in realistic serving scenarios with over a thousand LoRAs, maintaining 80% of the throughput of serving a single LoRA.
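To make the compression idea concrete, below is a minimal NumPy sketch of one way to jointly compress a collection of LoRA updates into a shared basis paired with LoRA-specific scaling matrices. The one-shot SVD construction, the function name joint_compress, and all shapes/ranks are illustrative assumptions; the paper's actual algorithm (including the clustering extension for large collections) may differ.

# Illustrative sketch of joint LoRA compression into a shared basis.
# Each LoRA update dW_i = B_i @ A_i is approximated as U @ Sigma_i @ V.T,
# where U and V are shared across all LoRAs and Sigma_i is LoRA-specific.
# This one-shot SVD construction is an assumption for illustration, not the
# paper's exact algorithm (which also clusters LoRAs before compressing).
import numpy as np

def joint_compress(loras, k):
    """loras: list of (B, A) pairs, B: (d_out, r), A: (r, d_in).
    Returns shared U (d_out, k), V (d_in, k), and per-LoRA Sigma_i (k, k)."""
    deltas = [B @ A for B, A in loras]                 # full low-rank updates
    # Shared left basis: top-k left singular vectors of the horizontal stack.
    U, _, _ = np.linalg.svd(np.hstack(deltas), full_matrices=False)
    U = U[:, :k]
    # Shared right basis: top-k right singular vectors of the vertical stack.
    _, _, Vt = np.linalg.svd(np.vstack(deltas), full_matrices=False)
    V = Vt[:k, :].T
    # LoRA-specific scaling matrices: project each update onto the shared basis.
    sigmas = [U.T @ dW @ V for dW in deltas]
    return U, V, sigmas

# Example: 50 rank-8 LoRAs for a 256x256 weight, compressed to a rank-32 shared basis.
rng = np.random.default_rng(0)
loras = [(rng.standard_normal((256, 8)), rng.standard_normal((8, 256)))
         for _ in range(50)]
U, V, sigmas = joint_compress(loras, k=32)
dW0_approx = U @ sigmas[0] @ V.T                       # serve-time reconstruction for LoRA 0

Under a scheme like this, only the shared factors plus one small k-by-k matrix per adapter need to stay resident on the GPU, which is the property that makes keeping thousands of adapters on-device plausible.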

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-gabrielsson25a,
  title     = {Compress then Serve: Serving Thousands of {L}o{RA} Adapters with Little Overhead},
  author    = {Gabrielsson, Rickard Br\"{u}el and Zhu, Jiacheng and Bhardwaj, Onkar and Choshen, Leshem and Greenewald, Kristjan and Yurochkin, Mikhail and Solomon, Justin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {18062--18095},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gabrielsson25a/gabrielsson25a.pdf},
  url       = {https://proceedings.mlr.press/v267/gabrielsson25a.html},
  abstract  = {Fine-tuning large language models (LLMs) with low-rank adaptations (LoRAs) has become common practice, often yielding numerous copies of the same LLM differing only in their LoRA updates. This paradigm presents challenges for systems that serve real-time responses to queries that each involve a different LoRA. Prior works optimize the design of such systems but still require continuous loading and offloading of LoRAs, as it is infeasible to store thousands of LoRAs in GPU memory. To mitigate this issue, we investigate the efficacy of compression when serving LoRAs. We propose a method for the joint compression of LoRAs into a shared basis paired with LoRA-specific scaling matrices. We extend our algorithm to learn clusters of LoRAs that are amenable to joint compression, allowing it to scale gracefully to large LoRA collections. Our experiments with up to 1000 LoRAs demonstrate that compressed LoRAs preserve performance while offering major throughput gains in realistic serving scenarios with over a thousand LoRAs, maintaining 80% of the throughput of serving a single LoRA.}
}
Endnote
%0 Conference Paper
%T Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
%A Rickard Brüel Gabrielsson
%A Jiacheng Zhu
%A Onkar Bhardwaj
%A Leshem Choshen
%A Kristjan Greenewald
%A Mikhail Yurochkin
%A Justin Solomon
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-gabrielsson25a
%I PMLR
%P 18062--18095
%U https://proceedings.mlr.press/v267/gabrielsson25a.html
%V 267
%X Fine-tuning large language models (LLMs) with low-rank adaptations (LoRAs) has become common practice, often yielding numerous copies of the same LLM differing only in their LoRA updates. This paradigm presents challenges for systems that serve real-time responses to queries that each involve a different LoRA. Prior works optimize the design of such systems but still require continuous loading and offloading of LoRAs, as it is infeasible to store thousands of LoRAs in GPU memory. To mitigate this issue, we investigate the efficacy of compression when serving LoRAs. We propose a method for the joint compression of LoRAs into a shared basis paired with LoRA-specific scaling matrices. We extend our algorithm to learn clusters of LoRAs that are amenable to joint compression, allowing it to scale gracefully to large LoRA collections. Our experiments with up to 1000 LoRAs demonstrate that compressed LoRAs preserve performance while offering major throughput gains in realistic serving scenarios with over a thousand LoRAs, maintaining 80% of the throughput of serving a single LoRA.
APA
Gabrielsson, R.B., Zhu, J., Bhardwaj, O., Choshen, L., Greenewald, K., Yurochkin, M. & Solomon, J. (2025). Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18062-18095. Available from https://proceedings.mlr.press/v267/gabrielsson25a.html.