Adversarial Collaborative Learning on Non-IID Features

Qinbin Li, Bingsheng He, Dawn Song
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:19504-19526, 2023.

Abstract

Federated Learning (FL) has been a popular approach to enable collaborative learning among multiple parties without exchanging raw data. However, FL model performance can degrade significantly on non-IID data. While many FL algorithms focus on non-IID labels, FL on non-IID features has largely been overlooked. Unlike typical FL approaches, this paper proposes a new learning concept called ADCOL (Adversarial Collaborative Learning) for non-IID features. Instead of adopting the widely used model-averaging scheme, ADCOL conducts training in an adversarial way: the server trains a discriminator to distinguish the representations of the parties, while the parties aim to generate a common representation distribution. Our experiments show that ADCOL achieves better performance than state-of-the-art FL algorithms on non-IID features.
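To make the adversarial training idea concrete, the following is a minimal PyTorch-style sketch of one communication round, based only on the description above and not on the authors' implementation. The model classes, architectures, optimizers, and the regularizer weight beta are illustrative assumptions: parties upload representations, the server trains a discriminator over party identities, and each party adds a penalty that pushes the discriminator toward a uniform (party-agnostic) prediction on its representations.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of one ADCOL-style round; shapes and hyperparameters are illustrative.
NUM_PARTIES = 3
REP_DIM = 32

class PartyModel(nn.Module):
    """Per-party encoder + classifier; only representations are sent to the server."""
    def __init__(self, in_dim=20, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, REP_DIM))
        self.head = nn.Linear(REP_DIM, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

class Discriminator(nn.Module):
    """Server-side discriminator that predicts which party produced a representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(REP_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_PARTIES))

    def forward(self, z):
        return self.net(z)

parties = [PartyModel() for _ in range(NUM_PARTIES)]
disc = Discriminator()
opt_parties = [torch.optim.SGD(p.parameters(), lr=0.01) for p in parties]
opt_disc = torch.optim.SGD(disc.parameters(), lr=0.01)

def local_update(pid, x, y, beta=0.1):
    """Party side: minimize the task loss plus a KL term that makes the
    discriminator's output on this party's representations close to uniform."""
    z, logits = parties[pid](x)
    task_loss = F.cross_entropy(logits, y)
    d_logits = disc(z)
    uniform = torch.full_like(d_logits, 1.0 / NUM_PARTIES)
    adv_loss = F.kl_div(F.log_softmax(d_logits, dim=1), uniform, reduction="batchmean")
    loss = task_loss + beta * adv_loss
    opt_parties[pid].zero_grad()
    loss.backward()
    opt_parties[pid].step()
    return z.detach()  # representations uploaded to the server

def server_update(reps_by_party):
    """Server side: train the discriminator to identify the source party of each representation."""
    zs = torch.cat(reps_by_party)
    labels = torch.cat([torch.full((len(z),), pid, dtype=torch.long)
                        for pid, z in enumerate(reps_by_party)])
    d_loss = F.cross_entropy(disc(zs), labels)
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

# One illustrative communication round on random data.
reps = []
for pid in range(NUM_PARTIES):
    x = torch.randn(16, 20)
    y = torch.randint(0, 10, (16,))
    reps.append(local_update(pid, x, y))
server_update(reps)

Note that, unlike model averaging, only representations and the discriminator are exchanged here; each party keeps its own model parameters throughout.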

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-li23j,
  title     = {Adversarial Collaborative Learning on Non-{IID} Features},
  author    = {Li, Qinbin and He, Bingsheng and Song, Dawn},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {19504--19526},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/li23j/li23j.pdf},
  url       = {https://proceedings.mlr.press/v202/li23j.html},
  abstract  = {Federated Learning (FL) has been a popular approach to enable collaborative learning on multiple parties without exchanging raw data. However, the model performance of FL may degrade a lot due to non-IID data. While many FL algorithms focus on non-IID labels, FL on non-IID features has largely been overlooked. Different from typical FL approaches, the paper proposes a new learning concept called ADCOL (Adversarial Collaborative Learning) for non-IID features. Instead of adopting the widely used model-averaging scheme, ADCOL conducts training in an adversarial way: the server aims to train a discriminator to distinguish the representations of the parties, while the parties aim to generate a common representation distribution. Our experiments show that ADCOL achieves better performance than state-of-the-art FL algorithms on non-IID features.}
}
Endnote
%0 Conference Paper
%T Adversarial Collaborative Learning on Non-IID Features
%A Qinbin Li
%A Bingsheng He
%A Dawn Song
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-li23j
%I PMLR
%P 19504--19526
%U https://proceedings.mlr.press/v202/li23j.html
%V 202
%X Federated Learning (FL) has been a popular approach to enable collaborative learning on multiple parties without exchanging raw data. However, the model performance of FL may degrade a lot due to non-IID data. While many FL algorithms focus on non-IID labels, FL on non-IID features has largely been overlooked. Different from typical FL approaches, the paper proposes a new learning concept called ADCOL (Adversarial Collaborative Learning) for non-IID features. Instead of adopting the widely used model-averaging scheme, ADCOL conducts training in an adversarial way: the server aims to train a discriminator to distinguish the representations of the parties, while the parties aim to generate a common representation distribution. Our experiments show that ADCOL achieves better performance than state-of-the-art FL algorithms on non-IID features.
APA
Li, Q., He, B. & Song, D. (2023). Adversarial Collaborative Learning on Non-IID Features. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:19504-19526. Available from https://proceedings.mlr.press/v202/li23j.html.
