Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Steven Wu, Jinfeng Yi
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26048-26067, 2022.

Abstract

Providing privacy protection has been one of the primary motivations of Federated Learning (FL). Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy into FL. To guarantee client-level differential privacy in FL algorithms, the clients’ transmitted model updates have to be clipped before privacy noise is added. Such a clipping operation is substantially different from its counterpart, gradient clipping, in centralized differentially private SGD, and has not been well understood. In this paper, we first empirically demonstrate that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity when training neural networks, partly because the clients’ updates become similar for several popular deep architectures. Based on this key observation, we provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between the clipping bias and the distribution of the clients’ updates. To the best of our knowledge, this is the first work that rigorously investigates theoretical and empirical issues regarding the clipping operation in FL algorithms.
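
For readers who want a concrete picture of the clip-then-noise operation the abstract refers to, below is a minimal sketch of one DP-FedAvg aggregation round in Python with NumPy. The function names (clip_update, dp_fedavg_round) and the exact noise calibration are illustrative assumptions, not the precise algorithm analyzed in the paper: each client update is scaled to have L2 norm at most the clipping threshold, the clipped updates are averaged, and Gaussian noise proportional to the per-client sensitivity is added.

import numpy as np

def clip_update(update, clip_norm):
    # Scale the update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng=None):
    # One server aggregation round (illustrative sketch): clip each
    # client's update, average, then add Gaussian noise calibrated to
    # the per-client sensitivity.
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = sum(clipped) / n
    # After clipping, replacing one client changes the sum by at most
    # clip_norm, so the average has sensitivity clip_norm / n.
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: four clients whose raw updates have very different magnitudes.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(10) * s for s in (0.5, 1.0, 2.0, 5.0)]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=1.0)

Because the scaling is applied per client, the clipped average generally differs from the true average whenever clients’ updates have heterogeneous norms; this is the clipping bias whose relationship to the distribution of the clients’ updates the paper characterizes.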

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-zhang22b,
  title     = {Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy},
  author    = {Zhang, Xinwei and Chen, Xiangyi and Hong, Mingyi and Wu, Steven and Yi, Jinfeng},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26048--26067},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhang22b/zhang22b.pdf},
  url       = {https://proceedings.mlr.press/v162/zhang22b.html}
}
Endnote
%0 Conference Paper
%T Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
%A Xinwei Zhang
%A Xiangyi Chen
%A Mingyi Hong
%A Steven Wu
%A Jinfeng Yi
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhang22b
%I PMLR
%P 26048--26067
%U https://proceedings.mlr.press/v162/zhang22b.html
%V 162
APA
Zhang, X., Chen, X., Hong, M., Wu, S. & Yi, J. (2022). Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26048-26067. Available from https://proceedings.mlr.press/v162/zhang22b.html.
