Can we Generalize and Distribute Private Representation Learning?

Sheikh Shams Azam, Taejin Kim, Seyyedali Hosseinalipour, Carlee Joe-Wong, Saurabh Bagchi, Christopher Brinton
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:11320-11340, 2022.

Abstract

We study the problem of learning representations that are private yet informative, i.e., that provide information about intended "ally" targets while hiding sensitive "adversary" attributes. We propose the Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that, unlike existing PRL solutions, accounts for multiple ally and adversary attributes. While a centrally aggregated dataset is a prerequisite for most PRL techniques, real-world data is often siloed across multiple distributed nodes that are unwilling to share raw data because of privacy concerns. We address this practical constraint by developing D-EIGAN, the first distributed PRL method, which learns representations at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders, as well as the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state of the art by a significant accuracy margin (47% improvement), and D-EIGAN’s performance is consistently on par with EIGAN under different network settings.
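To make the exclusion-inclusion idea concrete, below is a minimal PyTorch-style sketch of one training step. The module names (MLP encoder, per-task ally and adversary heads), the cross-entropy losses, the alpha/beta trade-off weights, and the single-step alternating updates are illustrative assumptions; the paper's exact losses, architectures, and update schedule may differ.

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))
    def forward(self, x):
        return self.net(x)

d_x, d_z = 32, 16
encoder = MLP(d_x, d_z)                                       # shared representation
allies = nn.ModuleList([MLP(d_z, 2) for _ in range(2)])       # "inclusion" heads
adversaries = nn.ModuleList([MLP(d_z, 2) for _ in range(2)])  # "exclusion" heads

enc_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
ally_opt = torch.optim.Adam(allies.parameters(), lr=1e-3)
adv_opt = torch.optim.Adam(adversaries.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
alpha, beta = 1.0, 1.0  # illustrative ally/adversary trade-off weights

def train_step(x, y_allies, y_advs):
    # (1) Ally and adversary heads each learn to predict their target
    #     from a frozen copy of the representation z.
    z = encoder(x).detach()
    ally_loss = sum(ce(h(z), y) for h, y in zip(allies, y_allies))
    adv_loss = sum(ce(h(z), y) for h, y in zip(adversaries, y_advs))
    ally_opt.zero_grad(); ally_loss.backward(); ally_opt.step()
    adv_opt.zero_grad(); adv_loss.backward(); adv_opt.step()
    # (2) The encoder is updated to help the allies while hurting the
    #     adversaries: the adversarial "exclusion-inclusion" objective.
    z = encoder(x)
    enc_loss = (alpha * sum(ce(h(z), y) for h, y in zip(allies, y_allies))
                - beta * sum(ce(h(z), y) for h, y in zip(adversaries, y_advs)))
    enc_opt.zero_grad(); enc_loss.backward(); enc_opt.step()

x = torch.randn(8, d_x)
train_step(x, [torch.randint(0, 2, (8,))] * 2, [torch.randint(0, 2, (8,))] * 2)

For the distributed setting, a hypothetical FedAvg-style synchronization step is sketched below, under the assumption that nodes periodically exchange only encoder parameters while raw data and labels stay local; the paper's actual aggregation rule and synchronization frequency may differ.

def sync_encoders(encoders):
    # Average encoder parameters across nodes and broadcast the result;
    # no source data is ever transmitted between nodes.
    with torch.no_grad():
        states = [e.state_dict() for e in encoders]
        avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
               for k in states[0]}
        for e in encoders:
            e.load_state_dict(avg)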

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-shams-azam22a,
  title     = {Can we Generalize and Distribute Private Representation Learning?},
  author    = {Shams Azam, Sheikh and Kim, Taejin and Hosseinalipour, Seyyedali and Joe-Wong, Carlee and Bagchi, Saurabh and Brinton, Christopher},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {11320--11340},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/shams-azam22a/shams-azam22a.pdf},
  url       = {https://proceedings.mlr.press/v151/shams-azam22a.html},
  abstract  = {We study the problem of learning representations that are private yet informative, i.e., that provide information about intended "ally" targets while hiding sensitive "adversary" attributes. We propose the Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that, unlike existing PRL solutions, accounts for multiple ally and adversary attributes. While a centrally aggregated dataset is a prerequisite for most PRL techniques, real-world data is often siloed across multiple distributed nodes that are unwilling to share raw data because of privacy concerns. We address this practical constraint by developing D-EIGAN, the first distributed PRL method, which learns representations at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders, as well as the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state of the art by a significant accuracy margin ($47\%$ improvement), and D-EIGAN's performance is consistently on par with EIGAN under different network settings.}
}
Endnote
%0 Conference Paper
%T Can we Generalize and Distribute Private Representation Learning?
%A Sheikh Shams Azam
%A Taejin Kim
%A Seyyedali Hosseinalipour
%A Carlee Joe-Wong
%A Saurabh Bagchi
%A Christopher Brinton
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-shams-azam22a
%I PMLR
%P 11320--11340
%U https://proceedings.mlr.press/v151/shams-azam22a.html
%V 151
%X We study the problem of learning representations that are private yet informative, i.e., that provide information about intended "ally" targets while hiding sensitive "adversary" attributes. We propose the Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that, unlike existing PRL solutions, accounts for multiple ally and adversary attributes. While a centrally aggregated dataset is a prerequisite for most PRL techniques, real-world data is often siloed across multiple distributed nodes that are unwilling to share raw data because of privacy concerns. We address this practical constraint by developing D-EIGAN, the first distributed PRL method, which learns representations at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders, as well as the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state of the art by a significant accuracy margin (47% improvement), and D-EIGAN's performance is consistently on par with EIGAN under different network settings.
APA
Shams Azam, S., Kim, T., Hosseinalipour, S., Joe-Wong, C., Bagchi, S. & Brinton, C. (2022). Can we Generalize and Distribute Private Representation Learning?. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:11320-11340. Available from https://proceedings.mlr.press/v151/shams-azam22a.html.