Do Outliers Ruin Collaboration?
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4180-4187, 2018.
Abstract
We consider the problem of learning a binary classifier from n different data sources, among which at most an η fraction are adversarial. The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. We present an algorithm that achieves an O(ηn + ln n) overhead, which is proved to be worst-case optimal. We also discuss the potential challenges to the design of a computationally efficient learning algorithm with a small overhead.
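As a rough formalization of the quantities the abstract refers to (the symbols m_A and m_PAC below are our own assumed PAC-learning notation, not taken from the paper), the overhead of a learning algorithm A can be written as:

% Hedged sketch in assumed notation; m_A and m_PAC are not the paper's symbols.
% m_A(\varepsilon, \delta): number of samples algorithm A needs to
%   (\varepsilon, \delta)-learn the hypothesis class from the n sources,
%   at most an \eta fraction of which are adversarial.
% m_PAC(\varepsilon, \delta): sample complexity of (\varepsilon, \delta)-learning
%   the same class from a single, uncorrupted data distribution.
\[
  \mathrm{overhead}(A) \;=\; \frac{m_A(\varepsilon,\delta)}{m_{\mathrm{PAC}}(\varepsilon,\delta)},
  \qquad
  \mathrm{overhead}(A^{\star}) \;=\; O(\eta n + \ln n).
\]

Worst-case optimality then means that no algorithm can guarantee an o(ηn + ln n) overhead on every instance, i.e., the O(ηn + ln n) upper bound is matched by an Ω(ηn + ln n) lower bound.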