Crowdsourcing with Arbitrary Adversaries
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2708-2717, 2018.
Abstract
Most existing work on crowdsourcing assumes that workers follow the Dawid-Skene model, or its special case the one-coin model, in which every worker makes mistakes independently of the other workers and with the same error probability on every task. We study a significant extension of this restricted model. We allow almost half of the workers to deviate from the one-coin model and, for those workers, we allow their error probabilities to be task-dependent and arbitrarily correlated. In other words, we allow for arbitrary adversaries, whose error probabilities can not only be high, but who can also perfectly collude. In this adversarial scenario, we design an efficient algorithm that consistently estimates the workers' error probabilities.
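To illustrate the setting, here is a minimal simulation sketch (not the paper's algorithm; all sizes and probabilities are hypothetical choices for illustration). Honest workers follow the one-coin model, flipping the true binary label independently with a fixed, task-independent error probability; adversarial workers collude perfectly and err in a task-dependent way. A naive majority vote shows why colluding adversaries are harmful:

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_honest, n_adversaries = 1000, 6, 5   # illustrative sizes
truth = rng.integers(0, 2, size=n_tasks)        # binary ground-truth labels

# One-coin workers: each flips the true label independently with a
# fixed, task-independent error probability p_i.
p = rng.uniform(0.1, 0.3, size=n_honest)
honest = np.array([np.where(rng.random(n_tasks) < p_i, 1 - truth, truth)
                   for p_i in p])

# Adversaries: perfectly colluding, with task-dependent errors -- here
# they all report the wrong label on an arbitrary half of the tasks.
bad_tasks = rng.random(n_tasks) < 0.5
adversarial = np.tile(np.where(bad_tasks, 1 - truth, truth),
                      (n_adversaries, 1))

labels = np.vstack([honest, adversarial])       # workers x tasks

# Naive majority vote: the colluding block drags accuracy down on the
# tasks where it votes wrong, since all honest workers must then agree.
votes = labels.mean(axis=0) > 0.5
accuracy = (votes == truth).mean()
print(f"majority-vote accuracy: {accuracy:.3f}")
```

Under the one-coin model alone, majority vote would be near-perfect with these error rates; the colluding adversaries make robust estimation of the per-worker error probabilities the harder, interesting problem.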