On the Sample Complexity of Adversarial Multi-Source PAC Learning

Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph Lampert
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5416-5425, 2020.

Abstract

We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms. Specifically, we analyze the situation in which a learning system obtains datasets from multiple sources, some of which might be biased or even adversarially perturbed. It is known that in the single-source case, an adversary with the power to corrupt a fixed fraction of the training data can prevent PAC-learnability, that is, even in the limit of infinite training data, no learning system can approach the optimal test error. In this work we show that, surprisingly, the same is not true in the multi-source setting, where the adversary can arbitrarily corrupt a fixed fraction of the data sources. Our main results are a generalization bound that provides finite-sample guarantees for this learning setting, as well as corresponding lower bounds. Besides establishing PAC-learnability, our results also show that in a cooperative learning setting, sharing data with other parties has provable benefits, even if some participants are malicious.
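To make the setting concrete, the following is a minimal toy simulation of the data model sketched in the abstract: several sources supply datasets drawn from the same distribution, and an adversary replaces a fixed fraction of the sources with arbitrary data. The least-squares "ERM", the planted adversarial labeling, and the cross-source agreement filter below are assumptions of this sketch chosen purely for illustration; they are not the algorithm or the analysis from the paper.

# Toy simulation of the multi-source adversarial setting described above.
# NOTE: illustrative sketch only, under assumptions of its own (linear
# least-squares "ERM", large-norm adversarial sources labeled by -w_true,
# and a simple cross-source agreement filter). NOT the paper's procedure.
import numpy as np

rng = np.random.default_rng(0)
d, n_sources, n_per_source = 10, 20, 200
alpha = 0.3                       # fraction of sources the adversary controls
w_true = rng.normal(size=d)

def sample_clean(n):
    """Draw n i.i.d. examples from the clean distribution."""
    X = rng.normal(size=(n, d))
    return X, np.sign(X @ w_true)

def fit_erm(X, y):
    """Least-squares stand-in for empirical risk minimization."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def test_error(w, n_test=5000):
    X, y = sample_clean(n_test)
    return np.mean(np.sign(X @ w) != y)

# Each source contributes its own dataset; an alpha-fraction of sources is
# replaced by adversarial data (large-norm inputs labeled by -w_true).
n_bad = int(alpha * n_sources)
sources = []
for i in range(n_sources):
    if i < n_bad:
        X = 10.0 * rng.normal(size=(n_per_source, d))
        y = np.sign(X @ (-w_true))
    else:
        X, y = sample_clean(n_per_source)
    sources.append((X, y))

# (a) Naive pooling: merge all samples and run ERM once.
X_all = np.vstack([X for X, _ in sources])
y_all = np.concatenate([y for _, y in sources])
err_naive = test_error(fit_erm(X_all, y_all))

# (b) Hypothetical per-source filter: fit each source separately, keep the
# sources whose model agrees with most of the other sources, then refit.
models = [fit_erm(X, y) for X, y in sources]
def agreement(i, j):
    Xj, yj = sources[j]
    return np.mean(np.sign(Xj @ models[i]) == yj)
scores = [np.median([agreement(i, j) for j in range(n_sources) if j != i])
          for i in range(n_sources)]
keep = [i for i in range(n_sources) if scores[i] > 0.5]
X_keep = np.vstack([sources[i][0] for i in keep])
y_keep = np.concatenate([sources[i][1] for i in keep])
err_filtered = test_error(fit_erm(X_keep, y_keep))

print(f"naive pooling:    test error = {err_naive:.3f}")
print(f"source filtering: test error = {err_filtered:.3f}")

With these settings the pooled learner is pulled toward the adversarial labeling, while discarding outlier sources recovers a near-optimal classifier. The paper's contribution is to characterize this regime in general, via a generalization bound with finite-sample guarantees and corresponding lower bounds.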

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-konstantinov20a,
  title     = {On the Sample Complexity of Adversarial Multi-Source {PAC} Learning},
  author    = {Konstantinov, Nikola and Frantar, Elias and Alistarh, Dan and Lampert, Christoph},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5416--5425},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a.pdf},
  url       = {https://proceedings.mlr.press/v119/konstantinov20a.html}
}
Endnote
%0 Conference Paper
%T On the Sample Complexity of Adversarial Multi-Source PAC Learning
%A Nikola Konstantinov
%A Elias Frantar
%A Dan Alistarh
%A Christoph Lampert
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-konstantinov20a
%I PMLR
%P 5416--5425
%U https://proceedings.mlr.press/v119/konstantinov20a.html
%V 119
APA
Konstantinov, N., Frantar, E., Alistarh, D. & Lampert, C. (2020). On the Sample Complexity of Adversarial Multi-Source PAC Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5416-5425. Available from https://proceedings.mlr.press/v119/konstantinov20a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a.pdf