Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models

Farzad Farhadzadeh, Debasmit Das, Shubhankar Borse, Fatih Porikli
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:16144-16160, 2025.

Abstract

We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without additional training data. This overcomes the limitations of traditional methods that require retraining when switching base models, which is often challenging due to data constraints. ProLoRA achieves this via projection of source adjustments into the target model’s weight space, leveraging subspace and null space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
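To make the projection step concrete, the PyTorch sketch below illustrates one plausible reading of it; this is not the authors' released code. It assumes the source and target layers share shapes, uses the target layer's leading singular directions as the "subspace," and refactors the projected update back into LoRA form. The function name project_lora and all details are hypothetical, and ProLoRA's null-space reasoning and aligned-layer selection are not reproduced here.

import torch

def project_lora(delta_w_src, w_tgt, rank):
    """Hypothetical sketch: project a source LoRA update onto the
    subspace spanned by a target layer's singular directions, then
    refactor the result as rank-`rank` LoRA factors (B, A)."""
    # Orthonormal bases for the target layer's column and row spaces
    U, _, Vh = torch.linalg.svd(w_tgt, full_matrices=False)
    # Orthogonal projection of the source update into that subspace
    projected = U @ (U.T @ delta_w_src @ Vh.T) @ Vh
    # Truncated SVD refactors the projected update at the LoRA rank
    Up, Sp, Vph = torch.linalg.svd(projected, full_matrices=False)
    B = Up[:, :rank] * Sp[:rank]   # shape: (out_dim, rank)
    A = Vph[:rank, :]              # shape: (rank, in_dim)
    return B, A

# Toy usage (purely illustrative shapes)
w_tgt = torch.randn(64, 32)
delta_w_src = torch.randn(64, 4) @ torch.randn(4, 32)  # rank-4 source update
B, A = project_lora(delta_w_src, w_tgt, rank=4)
print(B.shape, A.shape)  # torch.Size([64, 4]) torch.Size([4, 32])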

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-farhadzadeh25a,
  title     = {Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models},
  author    = {Farhadzadeh, Farzad and Das, Debasmit and Borse, Shubhankar and Porikli, Fatih},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {16144--16160},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/farhadzadeh25a/farhadzadeh25a.pdf},
  url       = {https://proceedings.mlr.press/v267/farhadzadeh25a.html},
  abstract  = {We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without additional training data. This overcomes the limitations of traditional methods that require retraining when switching base models, which is often challenging due to data constraints. ProLoRA achieves this via projection of source adjustments into the target model’s weight space, leveraging subspace and null space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.}
}
Endnote
%0 Conference Paper
%T Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models
%A Farzad Farhadzadeh
%A Debasmit Das
%A Shubhankar Borse
%A Fatih Porikli
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-farhadzadeh25a
%I PMLR
%P 16144--16160
%U https://proceedings.mlr.press/v267/farhadzadeh25a.html
%V 267
%X We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without additional training data. This overcomes the limitations of traditional methods that require retraining when switching base models, which is often challenging due to data constraints. ProLoRA achieves this via projection of source adjustments into the target model’s weight space, leveraging subspace and null space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
APA
Farhadzadeh, F., Das, D., Borse, S., & Porikli, F. (2025). Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:16144-16160. Available from https://proceedings.mlr.press/v267/farhadzadeh25a.html.
