Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks

Nurbek Tastan, Samuel Horváth, Karthik Nandakumar
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:59210-59236, 2025.

Abstract

Collaborative learning enables multiple participants to learn a single global model by exchanging focused updates instead of sharing data. One of the core challenges in collaborative learning is ensuring that participants are rewarded fairly for their contributions, which entails two key sub-problems: contribution assessment and reward allocation. This work focuses on fair reward allocation, where participants are incentivized through model rewards, i.e., differentiated final models whose performance is commensurate with their contributions. To this end, we leverage the concept of slimmable neural networks to collaboratively learn a shared global model whose performance degrades gracefully as model width is reduced. We also propose a post-training fair allocation algorithm that determines the model width awarded to each participant based on their contribution. We theoretically study the convergence of the proposed approach and empirically validate it through extensive experiments on different datasets and architectures. We further extend our approach to enable training-time model reward allocation.
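As a rough illustration of the first mechanism in the abstract, the following PyTorch sketch shows a minimal slimmable linear layer: the full-width weight matrix is stored once, and a forward pass at a smaller width multiplier slices its leading rows, so every narrower sub-network shares parameters with the full model. The names SlimmableLinear and width_mult are illustrative assumptions, not the paper's implementation.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableLinear(nn.Module):
    """Linear layer that can run at a fraction of its full width.

    Illustrative sketch only: the full-width weights are stored once,
    and a forward pass at width_mult < 1.0 keeps only the leading
    output rows (and accepts an already-slimmed input), so all widths
    share parameters with the full model.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def forward(self, x: torch.Tensor, width_mult: float = 1.0) -> torch.Tensor:
        out_features = max(1, int(self.weight.shape[0] * width_mult))
        in_features = x.shape[-1]  # input may already be slimmed upstream
        w = self.weight[:out_features, :in_features]
        b = self.bias[:out_features]
        return F.linear(x, w, b)


layer = SlimmableLinear(128, 64)
x = torch.randn(4, 128)
print(layer(x, width_mult=1.0).shape)   # torch.Size([4, 64])
print(layer(x, width_mult=0.25).shape)  # torch.Size([4, 16])

A post-training allocation rule then maps contribution scores to widths. The rule sketched below (normalize each score by the top contributor's score and snap it up to the smallest covering width on a grid, so the top contributor receives the full-width model and rewards are monotone in contribution) is a hypothetical stand-in for the paper's actual algorithm, which this page does not reproduce.

def allocate_widths(contributions: dict, width_grid=(0.25, 0.5, 0.75, 1.0)) -> dict:
    """Map contribution scores to model widths (hypothetical rule).

    Scores are normalized by the top contributor's score and snapped
    up to the smallest width on the grid that covers them, so rewards
    are monotone in contribution.
    """
    top = max(contributions.values())
    widths = {}
    for participant, score in contributions.items():
        frac = score / top if top > 0 else 1.0
        widths[participant] = min(w for w in width_grid if w >= frac)
    return widths


print(allocate_widths({"A": 10.0, "B": 6.0, "C": 2.0}))
# {'A': 1.0, 'B': 0.75, 'C': 0.25}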

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-tastan25a,
  title     = {Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks},
  author    = {Tastan, Nurbek and Horv\'{a}th, Samuel and Nandakumar, Karthik},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {59210--59236},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/tastan25a/tastan25a.pdf},
  url       = {https://proceedings.mlr.press/v267/tastan25a.html}
}
Endnote
%0 Conference Paper
%T Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks
%A Nurbek Tastan
%A Samuel Horváth
%A Karthik Nandakumar
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-tastan25a
%I PMLR
%P 59210--59236
%U https://proceedings.mlr.press/v267/tastan25a.html
%V 267
APA
Tastan, N., Horváth, S. & Nandakumar, K. (2025). Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:59210-59236. Available from https://proceedings.mlr.press/v267/tastan25a.html.
