Do Global and Local Perform Cooperatively or Adversarially in Heterogeneous Federated Learning?

Huiwen Wu, Shuo Zhang
Conference on Parsimony and Learning, PMLR 280:237-254, 2025.

Abstract

Heterogeneous federated learning (Hetero-FL) is an emerging machine learning framework that enables collaborative model training across devices with varying capabilities and data without sharing raw data. In Hetero-FL, two types of trainers exhibit distinct behaviors: the Global Trainer (GTr), which prioritizes average performance but lacks fine-grained client insights, and the Local Trainer (LTr), which addresses local issues and excels on local data but struggles with generalization. Combining the two to obtain a strong GTr is therefore crucial. Unlike prevalent personalization strategies that supplement the GTr with the LTr, our work introduces a novel approach in which the GTr and LTr collaborate adversarially: adversarial behavior by the LTr can, unexpectedly, enhance the overall performance of the GTr in the combined global-local training process. Building on this understanding of adversarial cooperation, we propose an alternating training strategy named Fed A(dversarial) B(ased) C(ooperation) (FedABC), which follows a "G-L-G-L" schedule: the LTr increases the global loss, preventing the GTr from becoming trapped in local minima. Our comprehensive experiments show accuracy gains of up to 13.77% and faster convergence over existing state-of-the-art Hetero-FL methods. We further validate the effectiveness and efficiency of our approach in terms of fairness, generalizability, and long-term behavior. Ultimately, our proposed method informs the design of training strategies for Hetero-FL models, emphasizing adversarial cooperation between the GTr and LTr in real-world scenarios.
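The alternating "G-L-G-L" schedule can be illustrated with a toy sketch: a global descent step on the average client loss, followed by a small adversarial ascent step standing in for the LTr's role of increasing the global loss. All function names, the quadratic loss, and the step sizes below are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
# Hypothetical sketch of an alternating "G-L-G-L" schedule (assumptions, not
# the FedABC implementation): GTr descends the average client loss; LTr then
# takes a small adversarial ascent step on the same loss, as the abstract
# describes the LTr increasing the global loss to keep the GTr out of poor
# local minima.

def grad_avg_loss(w, client_data):
    """Gradient of the average toy loss 0.5 * (w - c)^2 over clients c."""
    return sum(w - c for c in client_data) / len(client_data)

def fedabc_round(w, client_data, eta_g=0.1, eta_l=0.02):
    """One 'G-L' step: global descent followed by a smaller adversarial ascent."""
    w = w - eta_g * grad_avg_loss(w, client_data)  # GTr: cooperative descent
    w = w + eta_l * grad_avg_loss(w, client_data)  # LTr: adversarial ascent
    return w

def train(w0, client_data, rounds=200):
    """Alternate G and L steps for a fixed number of rounds."""
    w = w0
    for _ in range(rounds):
        w = fedabc_round(w, client_data)
    return w
```

Because the ascent step is smaller than the descent step (eta_l < eta_g), each round still makes net progress on the average loss; on this convex toy problem the iterate contracts toward the mean of the client optima.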

Cite this Paper


BibTeX
@InProceedings{pmlr-v280-wu25a,
  title     = {Do Global and Local Perform Cooperatively or Adversarially in Heterogeneous Federated Learning?},
  author    = {Wu, Huiwen and Zhang, Shuo},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {237--254},
  year      = {2025},
  editor    = {Chen, Beidi and Liu, Shijia and Pilanci, Mert and Su, Weijie and Sulam, Jeremias and Wang, Yuxiang and Zhu, Zhihui},
  volume    = {280},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v280/main/assets/wu25a/wu25a.pdf},
  url       = {https://proceedings.mlr.press/v280/wu25a.html},
  abstract  = {Heterogeneous federated learning (Hetero-FL) is an emerging machine learning framework that enables collaborative model training across devices with varying capabilities and data without sharing raw data. In Hetero-FL, two types of trainers exhibit distinct behaviors: the Global Trainer (GTr), which prioritizes average performance but lacks fine-grained client insights, and the Local Trainer (LTr), which addresses local issues and excels on local data but struggles with generalization. Combining the two to obtain a strong GTr is therefore crucial. Unlike prevalent personalization strategies that supplement the GTr with the LTr, our work introduces a novel approach in which the GTr and LTr collaborate adversarially: adversarial behavior by the LTr can, unexpectedly, enhance the overall performance of the GTr in the combined global-local training process. Building on this understanding of adversarial cooperation, we propose an alternating training strategy named Fed A(dversarial) B(ased) C(ooperation) (FedABC), which follows a "G-L-G-L" schedule: the LTr increases the global loss, preventing the GTr from becoming trapped in local minima. Our comprehensive experiments show accuracy gains of up to 13.77% and faster convergence over existing state-of-the-art Hetero-FL methods. We further validate the effectiveness and efficiency of our approach in terms of fairness, generalizability, and long-term behavior. Ultimately, our proposed method informs the design of training strategies for Hetero-FL models, emphasizing adversarial cooperation between the GTr and LTr in real-world scenarios.}
}
Endnote
%0 Conference Paper
%T Do Global and Local Perform Cooperatively or Adversarially in Heterogeneous Federated Learning?
%A Huiwen Wu
%A Shuo Zhang
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Beidi Chen
%E Shijia Liu
%E Mert Pilanci
%E Weijie Su
%E Jeremias Sulam
%E Yuxiang Wang
%E Zhihui Zhu
%F pmlr-v280-wu25a
%I PMLR
%P 237--254
%U https://proceedings.mlr.press/v280/wu25a.html
%V 280
%X Heterogeneous federated learning (Hetero-FL) is an emerging machine learning framework that enables collaborative model training across devices with varying capabilities and data without sharing raw data. In Hetero-FL, two types of trainers exhibit distinct behaviors: the Global Trainer (GTr), which prioritizes average performance but lacks fine-grained client insights, and the Local Trainer (LTr), which addresses local issues and excels on local data but struggles with generalization. Combining the two to obtain a strong GTr is therefore crucial. Unlike prevalent personalization strategies that supplement the GTr with the LTr, our work introduces a novel approach in which the GTr and LTr collaborate adversarially: adversarial behavior by the LTr can, unexpectedly, enhance the overall performance of the GTr in the combined global-local training process. Building on this understanding of adversarial cooperation, we propose an alternating training strategy named Fed A(dversarial) B(ased) C(ooperation) (FedABC), which follows a "G-L-G-L" schedule: the LTr increases the global loss, preventing the GTr from becoming trapped in local minima. Our comprehensive experiments show accuracy gains of up to 13.77% and faster convergence over existing state-of-the-art Hetero-FL methods. We further validate the effectiveness and efficiency of our approach in terms of fairness, generalizability, and long-term behavior. Ultimately, our proposed method informs the design of training strategies for Hetero-FL models, emphasizing adversarial cooperation between the GTr and LTr in real-world scenarios.
APA
Wu, H. & Zhang, S. (2025). Do Global and Local Perform Cooperatively or Adversarially in Heterogeneous Federated Learning?. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 280:237-254. Available from https://proceedings.mlr.press/v280/wu25a.html.
