Debiasing Model Updates for Improving Personalized Federated Training

Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:21-31, 2021.

Abstract

We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to meet its specific objective. Different from the conventional federated learning setting, training customized models for each device is hindered both by the inherent data biases of the various devices and by the requirements imposed by the federated architecture. We propose gradient-correction methods that build on prior work and explicitly de-bias the meta-model in the distributed heterogeneous data setting to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex, and nonconvex meta objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.
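To make the training loop concrete, the sketch below is a minimal, illustrative rendering of the idea in Python (NumPy), not the authors' exact algorithm: first-order MAML-style personalization on synthetic linear tasks, with a per-device correction state (a FedDyn-style dynamic regularizer) standing in for the paper's gradient de-biasing. All names, hyperparameters, and the specific correction rule are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
num_devices, dim, n = 10, 5, 20

# Heterogeneous devices: each holds a private linear task y = X @ w_i + noise,
# with a different ground-truth w_i per device (the source of data bias).
devices = []
for _ in range(num_devices):
    X = rng.normal(size=(n, dim))
    w_i = rng.normal(size=dim)
    devices.append((X, X @ w_i + 0.1 * rng.normal(size=n)))

def grad(w, X, y):
    # Gradient of the mean-squared-error loss.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_grad(w, X, y, inner_lr=0.05):
    # First-order MAML meta-gradient: adapt one step locally, then evaluate
    # the gradient at the personalized point.
    w_pers = w - inner_lr * grad(w, X, y)
    return grad(w_pers, X, y)

w_server = np.zeros(dim)
correction = [np.zeros(dim) for _ in range(num_devices)]  # per-device debias state
alpha, local_lr, local_steps, rounds = 0.1, 0.05, 5, 200

for _ in range(rounds):
    updated = []
    for i, (X, y) in enumerate(devices):
        w = w_server.copy()
        for _ in range(local_steps):
            # The correction term counteracts the bias each device's data
            # would otherwise inject into the shared meta-model.
            g = meta_grad(w, X, y) - correction[i] + alpha * (w - w_server)
            w -= local_lr * g
        correction[i] -= alpha * (w - w_server)  # accumulate local/global drift
        updated.append(w)
    w_server = np.mean(updated, axis=0)  # server aggregates debiased updates

# Deployment: each device personalizes the meta-model with one local step.
personalized = [w_server - 0.05 * grad(w_server, X, y) for X, y in devices]

The design point the sketch tries to capture is that each device maintains a correction state tracking the gap between its local update direction and the server model, so heterogeneous data no longer biases the aggregated meta-model; only model updates, never raw data, travel between devices and server.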

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-acar21a,
  title     = {Debiasing Model Updates for Improving Personalized Federated Training},
  author    = {Acar, Durmus Alp Emre and Zhao, Yue and Zhu, Ruizhao and Matas, Ramon and Mattina, Matthew and Whatmough, Paul and Saligrama, Venkatesh},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {21--31},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/acar21a/acar21a.pdf},
  url       = {https://proceedings.mlr.press/v139/acar21a.html}
}
APA
Acar, D.A.E., Zhao, Y., Zhu, R., Matas, R., Mattina, M., Whatmough, P. & Saligrama, V. (2021). Debiasing Model Updates for Improving Personalized Federated Training. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:21-31. Available from https://proceedings.mlr.press/v139/acar21a.html.