Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring

Zhengquan Luo, Yunlong Wang, Zilei Wang, Zhenan Sun, Tieniu Tan
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:14527-14541, 2022.

Abstract

Attributes skew hinders current federated learning (FL) frameworks from maintaining consistent optimization directions across clients, which inevitably degrades performance and destabilizes convergence. The core problems are: 1) domain-specific attributes, which are non-causal and only locally valid, are inadvertently mixed into the global aggregation; and 2) one-stage optimization of entangled attributes cannot simultaneously satisfy two conflicting objectives, i.e., generalization and personalization. To cope with these problems, we propose disentangled federated learning (DFL), which disentangles the domain-specific and cross-invariant attributes into two complementary branches trained independently by the proposed alternating local-global optimization. Importantly, a convergence analysis proves that the FL system converges stably even when incomplete client models participate in the global aggregation, which greatly expands the application scope of FL. Extensive experiments verify that DFL achieves higher performance, better interpretability, and a faster convergence rate than state-of-the-art (SOTA) FL methods on both manually synthesized and realistic attributes-skew datasets.
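The core mechanism described above is that only the cross-invariant branch participates in global aggregation, while the domain-specific branch stays on each client. Below is a minimal sketch of that invariant-aggregation step, assuming a FedAvg-style average over parameter dictionaries; the function names and the "inv."/"spec." parameter prefixes are hypothetical illustrations for this page, not the authors' released implementation.

import numpy as np

def aggregate_invariant(client_models, invariant_keys):
    # FedAvg-style average restricted to the cross-invariant branch;
    # domain-specific parameters never leave the clients.
    n = len(client_models)
    return {k: sum(m[k] for m in client_models) / n for k in invariant_keys}

def broadcast_invariant(client_models, global_invariant):
    # Each client overwrites its invariant branch with the global average
    # while keeping its locally trained domain-specific branch.
    for m in client_models:
        m.update({k: v.copy() for k, v in global_invariant.items()})

# Toy round with two clients: parameters prefixed "inv." are shared,
# parameters prefixed "spec." stay local (both prefixes hypothetical).
clients = [
    {"inv.w": np.full(3, float(i)), "spec.w": np.full(3, 10.0 * i)}
    for i in (1, 2)
]
global_inv = aggregate_invariant(clients, ["inv.w"])
broadcast_invariant(clients, global_inv)
print(clients[0]["inv.w"])   # -> [1.5 1.5 1.5] (averaged across clients)
print(clients[0]["spec.w"])  # -> [10. 10. 10.] (unchanged, personalized)

In this reading, the convergence claim in the abstract concerns exactly this kind of partial participation: the server only ever sees and averages a subset of each client's parameters.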

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-luo22b,
  title     = {Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring},
  author    = {Luo, Zhengquan and Wang, Yunlong and Wang, Zilei and Sun, Zhenan and Tan, Tieniu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {14527--14541},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/luo22b/luo22b.pdf},
  url       = {https://proceedings.mlr.press/v162/luo22b.html}
}
Endnote
%0 Conference Paper
%T Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring
%A Zhengquan Luo
%A Yunlong Wang
%A Zilei Wang
%A Zhenan Sun
%A Tieniu Tan
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-luo22b
%I PMLR
%P 14527--14541
%U https://proceedings.mlr.press/v162/luo22b.html
%V 162
APA
Luo, Z., Wang, Y., Wang, Z., Sun, Z. & Tan, T. (2022). Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:14527-14541. Available from https://proceedings.mlr.press/v162/luo22b.html.
