Weight Weaving: Parameter Pooling for Data-Free Model Merging

Levy Chaves, Eduardo Valle, Sandra Avila
Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models, PMLR 322:317-329, 2026.

Abstract

Model merging provides a cost-effective, data-efficient way to combine specialized deep neural networks through parameter integration. This technique exploits the strengths of specialist models across downstream tasks without retraining. Most model merging approaches critically depend on scaling hyper-parameters $\lambda$ that weight each model's contribution, either globally or individually. Principled approaches for setting scaling factors without access to any data (data-free) are scarce, often leading researchers to tune $\lambda$ using privileged data from the evaluation set, which is infeasible in practice. To address this limitation, we introduce Weight Weaving, a plug-and-play technique that pools model weights across the $\lambda$ search space using user-defined pooling functions, such as averaging, random selection, or even existing model merging methods. Our method is highly modular, imposing minimal constraints on the search space. It operates orthogonally to existing model merging methods and eliminates the need for evaluation data. We validate Weight Weaving across three ViT variants in three experimental setups: vision multi-task learning, vision continual learning, and domain generalization. Our method consistently improves the performance of several model merging methods, achieving average accuracy gains of up to 15.9 percentage points in a data-free setting.
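
The following minimal Python sketch only illustrates the idea described in the abstract, not the authors' implementation. It assumes task arithmetic as the base merging method and parameter-wise averaging as the pooling function; the names (task_arithmetic_merge, average_pool, weight_weaving) and the example lambda grid are hypothetical.

import torch

def task_arithmetic_merge(theta_0, task_vectors, lam):
    # Base merging step (task-arithmetic style): add the lambda-scaled sum of
    # task vectors to the pretrained weights, parameter by parameter.
    merged = {}
    for name, base in theta_0.items():
        delta = sum(tv[name] for tv in task_vectors)
        merged[name] = base + lam * delta
    return merged

def average_pool(candidates):
    # User-defined pooling function: parameter-wise average of the candidate
    # merged models (random selection or another merger could be used instead).
    return {name: torch.stack([c[name] for c in candidates]).mean(dim=0)
            for name in candidates[0]}

def weight_weaving(theta_0, task_vectors, lambdas,
                   merge=task_arithmetic_merge, pool=average_pool):
    # Data-free pooling over the lambda search space: build one merged model
    # per lambda, then pool them without touching any evaluation data.
    candidates = [merge(theta_0, task_vectors, lam) for lam in lambdas]
    return pool(candidates)

# Example usage over a small hypothetical lambda grid:
# woven_state_dict = weight_weaving(theta_0, task_vectors,
#                                   lambdas=[0.1 * i for i in range(1, 11)])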

Cite this Paper


BibTeX
@InProceedings{pmlr-v322-chaves26a,
  title     = {Weight Weaving: Parameter Pooling for Data-Free Model Merging},
  author    = {Chaves, Levy and Valle, Eduardo and Avila, Sandra},
  booktitle = {Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models},
  pages     = {317--329},
  year      = {2026},
  editor    = {Fumero, Marco and Domine, Clementine and L{\"a}hner, Zorah and Cannistraci, Irene and Zhao, Bo and Williams, Alex},
  volume    = {322},
  series    = {Proceedings of Machine Learning Research},
  month     = {06 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v322/main/assets/chaves26a/chaves26a.pdf},
  url       = {https://proceedings.mlr.press/v322/chaves26a.html},
  abstract  = {Model merging provides a cost-effective, data-efficient way to combine specialized deep neural networks through parameter integration. This technique exploits the strengths of specialist models across downstream tasks without retraining. Most model merging approaches critically depend on scaling hyper-parameters $\lambda$ that weight each model's contribution, either globally or individually. Principled approaches for setting scaling factors without access to any data (data-free) are scarce, often leading researchers to tune $\lambda$ using privileged data from the evaluation set, which is infeasible in practice. To address this limitation, we introduce Weight Weaving, a plug-and-play technique that pools model weights across the $\lambda$ search space using user-defined pooling functions, such as averaging, random selection, or even existing model merging methods. Our method is highly modular, imposing minimal constraints on the search space. It operates orthogonally to existing model merging methods and eliminates the need for evaluation data. We validate Weight Weaving across three ViT variants in three experimental setups: vision multi-task learning, vision continual learning, and domain generalization. Our method consistently improves the performance of several model merging methods, achieving average accuracy gains of up to 15.9 percentage points in a data-free setting.}
}
Endnote
%0 Conference Paper
%T Weight Weaving: Parameter Pooling for Data-Free Model Merging
%A Levy Chaves
%A Eduardo Valle
%A Sandra Avila
%B Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2026
%E Marco Fumero
%E Clementine Domine
%E Zorah Lähner
%E Irene Cannistraci
%E Bo Zhao
%E Alex Williams
%F pmlr-v322-chaves26a
%I PMLR
%P 317--329
%U https://proceedings.mlr.press/v322/chaves26a.html
%V 322
%X Model merging provides a cost-effective, data-efficient way to combine specialized deep neural networks through parameter integration. This technique exploits the strengths of specialist models across downstream tasks without retraining. Most model merging approaches critically depend on scaling hyper-parameters $\lambda$ that weight each model's contribution, either globally or individually. Principled approaches for setting scaling factors without access to any data (data-free) are scarce, often leading researchers to tune $\lambda$ using privileged data from the evaluation set, which is infeasible in practice. To address this limitation, we introduce Weight Weaving, a plug-and-play technique that pools model weights across the $\lambda$ search space using user-defined pooling functions, such as averaging, random selection, or even existing model merging methods. Our method is highly modular, imposing minimal constraints on the search space. It operates orthogonally to existing model merging methods and eliminates the need for evaluation data. We validate Weight Weaving across three ViT variants in three experimental setups: vision multi-task learning, vision continual learning, and domain generalization. Our method consistently improves the performance of several model merging methods, achieving average accuracy gains of up to 15.9 percentage points in a data-free setting.
APA
Chaves, L., Valle, E. & Avila, S. (2026). Weight Weaving: Parameter Pooling for Data-Free Model Merging. Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 322:317-329. Available from https://proceedings.mlr.press/v322/chaves26a.html.
