DPFL: Decentralized Personalized Federated Learning

Salma Kharrat, Marco Canini, Samuel Horváth
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5086-5094, 2025.

Abstract

This work addresses the challenges of data heterogeneity and communication constraints in decentralized federated learning (FL). We introduce decentralized personalized FL (DPFL), a bi-level optimization framework that enhances personalized FL by leveraging combinatorial relationships among clients, enabling fine-grained and targeted collaborations. By employing a constrained greedy algorithm, DPFL constructs a collaboration graph that guides clients in choosing suitable collaborators, enabling personalized model training tailored to local data while respecting a fixed and predefined communication and resource budget. Our theoretical analysis demonstrates that the proposed objective for constructing the collaboration graph yields superior or equivalent performance compared to any alternative collaboration structures, including pure local training. Extensive experiments across diverse datasets show that DPFL consistently outperforms existing methods, effectively handling non-IID data, reducing communication overhead, and improving resource efficiency in real-world decentralized FL scenarios. The code can be accessed at: \url{https://github.com/salmakh1/DPFL}.
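To make the budget-constrained greedy construction concrete, below is a minimal, hypothetical Python sketch: each client grows its collaborator set by adding the candidate with the largest marginal utility, stopping once its communication budget is exhausted or no candidate improves on training alone (which recovers pure local training). The gain function, names, and toy data are illustrative assumptions, not the paper's actual objective; the authors' implementation is in the linked repository.

from typing import Callable, List, Set

def greedy_collaborators(
    client: int,
    candidates: List[int],
    gain: Callable[[int, Set[int], int], float],  # marginal utility of adding j
    budget: int,  # max number of collaborators (communication budget)
) -> Set[int]:
    """Greedily select collaborators for `client` under a fixed budget."""
    chosen: Set[int] = set()
    while len(chosen) < budget:
        remaining = [j for j in candidates if j != client and j not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda j: gain(client, chosen, j))
        if gain(client, chosen, best) <= 0.0:
            break  # no candidate helps: fall back to local-only training
        chosen.add(best)
    return chosen

# Toy usage: utility is label-profile overlap minus a fixed collaboration cost.
# The profiles and the 0.5 cost are made up for illustration.
if __name__ == "__main__":
    profiles = {0: {0, 1}, 1: {0, 1}, 2: {7, 8}, 3: {1, 2}}

    def toy_gain(i: int, current: Set[int], j: int) -> float:
        return len(profiles[i] & profiles[j]) - 0.5

    print(greedy_collaborators(0, list(profiles), toy_gain, budget=2))
    # e.g. {1, 3}: client 0 collaborates with the clients whose labels overlap its own

Running this per client yields a directed collaboration graph in which every node respects its own budget, mirroring the role the collaboration graph plays in DPFL at a high level.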

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-kharrat25a,
  title     = {DPFL: Decentralized Personalized Federated Learning},
  author    = {Kharrat, Salma and Canini, Marco and Horv{\'a}th, Samuel},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5086--5094},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kharrat25a/kharrat25a.pdf},
  url       = {https://proceedings.mlr.press/v258/kharrat25a.html},
  abstract  = {This work addresses the challenges of data heterogeneity and communication constraints in decentralized federated learning (FL). We introduce decentralized personalized FL (DPFL), a bi-level optimization framework that enhances personalized FL by leveraging combinatorial relationships among clients, enabling fine-grained and targeted collaborations. By employing a constrained greedy algorithm, DPFL constructs a collaboration graph that guides clients in choosing suitable collaborators, enabling personalized model training tailored to local data while respecting a fixed and predefined communication and resource budget. Our theoretical analysis demonstrates that the proposed objective for constructing the collaboration graph yields superior or equivalent performance compared to any alternative collaboration structures, including pure local training. Extensive experiments across diverse datasets show that DPFL consistently outperforms existing methods, effectively handling non-IID data, reducing communication overhead, and improving resource efficiency in real-world decentralized FL scenarios. The code can be accessed at: \url{https://github.com/salmakh1/DPFL}.}
}
Endnote
%0 Conference Paper
%T DPFL: Decentralized Personalized Federated Learning
%A Salma Kharrat
%A Marco Canini
%A Samuel Horváth
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kharrat25a
%I PMLR
%P 5086--5094
%U https://proceedings.mlr.press/v258/kharrat25a.html
%V 258
%X This work addresses the challenges of data heterogeneity and communication constraints in decentralized federated learning (FL). We introduce decentralized personalized FL (DPFL), a bi-level optimization framework that enhances personalized FL by leveraging combinatorial relationships among clients, enabling fine-grained and targeted collaborations. By employing a constrained greedy algorithm, DPFL constructs a collaboration graph that guides clients in choosing suitable collaborators, enabling personalized model training tailored to local data while respecting a fixed and predefined communication and resource budget. Our theoretical analysis demonstrates that the proposed objective for constructing the collaboration graph yields superior or equivalent performance compared to any alternative collaboration structures, including pure local training. Extensive experiments across diverse datasets show that DPFL consistently outperforms existing methods, effectively handling non-IID data, reducing communication overhead, and improving resource efficiency in real-world decentralized FL scenarios. The code can be accessed at: \url{https://github.com/salmakh1/DPFL}.
APA
Kharrat, S., Canini, M. & Horváth, S. (2025). DPFL: Decentralized Personalized Federated Learning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5086-5094. Available from https://proceedings.mlr.press/v258/kharrat25a.html.
