DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training

Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, Dacheng Tao
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:4587-4604, 2022.

Abstract

Personalized federated learning is proposed to handle the data heterogeneity problem amongst clients by learning dedicated tailored local models for each user. However, existing works are often built in a centralized way, leading to high communication pressure and high vulnerability when a failure or an attack on the central server occurs. In this work, we propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named DisPFL, which employs personalized sparse masks to customize sparse local models on the edge. To further save the communication and computation cost, we propose a decentralized sparse training technique, which means that each local model in DisPFL only maintains a fixed number of active parameters throughout the whole local training and peer-to-peer communication process. Comprehensive experiments demonstrate that DisPFL significantly saves the communication bottleneck for the busiest node among all clients and, at the same time, achieves higher model accuracy with less computation cost and communication rounds. Furthermore, we demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities and achieves better personalized performances.
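The core mechanism the abstract describes — each client keeping a personalized sparse mask and a fixed number of active parameters through local training and peer-to-peer exchange — can be illustrated with a minimal sketch. This is not the authors' algorithm, just a conceptual toy: a ring topology, one gossip round, and a re-application of each client's own mask after averaging neighbors' sparse models, so the active-parameter budget never grows. All names (`make_mask`, `gossip_round`, the ring neighborhood) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mask(dim, density, rng):
    """Binary mask with a fixed number of active entries (the sparsity budget)."""
    k = int(density * dim)
    idx = rng.choice(dim, size=k, replace=False)
    mask = np.zeros(dim, dtype=bool)
    mask[idx] = True
    return mask

n_clients, dim, density = 4, 100, 0.2
# Each client holds a personalized mask and a sparse local model.
masks = [make_mask(dim, density, rng) for _ in range(n_clients)]
weights = [rng.normal(size=dim) * m for m in masks]

def gossip_round(weights, masks):
    """One decentralized (peer-to-peer) round on a ring topology.

    Each client averages the sparse models of itself and its two ring
    neighbours, then re-applies its own personalized mask so the number
    of active parameters stays fixed.
    """
    n = len(weights)
    new_weights = []
    for i in range(n):
        neigh = [weights[i], weights[(i - 1) % n], weights[(i + 1) % n]]
        avg = np.mean(neigh, axis=0)
        new_weights.append(avg * masks[i])  # personalized mask preserves sparsity
    return new_weights

weights = gossip_round(weights, masks)
# No client ever carries weight outside its own mask.
for w, m in zip(weights, masks):
    assert np.count_nonzero(w[~m]) == 0
```

Because only the masked entries are ever nonzero, a real implementation would transmit just the active values and mask indices, which is where the per-round communication saving comes from.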

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-dai22b,
  title     = {{D}is{PFL}: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training},
  author    = {Dai, Rong and Shen, Li and He, Fengxiang and Tian, Xinmei and Tao, Dacheng},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {4587--4604},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/dai22b/dai22b.pdf},
  url       = {https://proceedings.mlr.press/v162/dai22b.html},
  abstract  = {Personalized federated learning is proposed to handle the data heterogeneity problem amongst clients by learning dedicated tailored local models for each user. However, existing works are often built in a centralized way, leading to high communication pressure and high vulnerability when a failure or an attack on the central server occurs. In this work, we propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named DisPFL, which employs personalized sparse masks to customize sparse local models on the edge. To further save the communication and computation cost, we propose a decentralized sparse training technique, which means that each local model in DisPFL only maintains a fixed number of active parameters throughout the whole local training and peer-to-peer communication process. Comprehensive experiments demonstrate that DisPFL significantly saves the communication bottleneck for the busiest node among all clients and, at the same time, achieves higher model accuracy with less computation cost and communication rounds. Furthermore, we demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities and achieves better personalized performances.}
}
Endnote
%0 Conference Paper
%T DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
%A Rong Dai
%A Li Shen
%A Fengxiang He
%A Xinmei Tian
%A Dacheng Tao
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-dai22b
%I PMLR
%P 4587--4604
%U https://proceedings.mlr.press/v162/dai22b.html
%V 162
%X Personalized federated learning is proposed to handle the data heterogeneity problem amongst clients by learning dedicated tailored local models for each user. However, existing works are often built in a centralized way, leading to high communication pressure and high vulnerability when a failure or an attack on the central server occurs. In this work, we propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named DisPFL, which employs personalized sparse masks to customize sparse local models on the edge. To further save the communication and computation cost, we propose a decentralized sparse training technique, which means that each local model in DisPFL only maintains a fixed number of active parameters throughout the whole local training and peer-to-peer communication process. Comprehensive experiments demonstrate that DisPFL significantly saves the communication bottleneck for the busiest node among all clients and, at the same time, achieves higher model accuracy with less computation cost and communication rounds. Furthermore, we demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities and achieves better personalized performances.
APA
Dai, R., Shen, L., He, F., Tian, X. &amp; Tao, D. (2022). DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:4587-4604. Available from https://proceedings.mlr.press/v162/dai22b.html.
