Robust Learning from Untrusted Sources

Nikola Konstantinov, Christoph Lampert
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3488-3498, 2019.

Abstract

Modern machine learning methods often require more data for training than a single expert can provide. Therefore, it has become a standard procedure to collect data from multiple external sources, e.g. via crowdsourcing. Unfortunately, the quality of these sources is not always guaranteed. As further complications, the data might be stored in a distributed way, or might even have to remain private. In this work, we address the question of how to learn robustly in such scenarios. Studying the problem through the lens of statistical learning theory, we derive a procedure that allows for learning from all available sources, yet automatically suppresses irrelevant or corrupted data. We show by extensive experiments that our method provides significant improvements over alternative approaches from robust statistics and distributed optimization.

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-konstantinov19a,
  title     = {Robust Learning from Untrusted Sources},
  author    = {Konstantinov, Nikola and Lampert, Christoph},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3488--3498},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/konstantinov19a/konstantinov19a.pdf},
  url       = {https://proceedings.mlr.press/v97/konstantinov19a.html},
  abstract  = {Modern machine learning methods often require more data for training than a single expert can provide. Therefore, it has become a standard procedure to collect data from multiple external sources, \eg via crowdsourcing. Unfortunately, the quality of these sources is not always guaranteed. As further complications, the data might be stored in a distributed way, or might even have to remain private. In this work, we address the question of how to learn robustly in such scenarios. Studying the problem through the lens of statistical learning theory, we derive a procedure that allows for learning from all available sources, yet automatically suppresses irrelevant or corrupted data. We show by extensive experiments that our method provides significant improvements over alternative approaches from robust statistics and distributed optimization.}
}
Endnote
%0 Conference Paper
%T Robust Learning from Untrusted Sources
%A Nikola Konstantinov
%A Christoph Lampert
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-konstantinov19a
%I PMLR
%P 3488--3498
%U https://proceedings.mlr.press/v97/konstantinov19a.html
%V 97
%X Modern machine learning methods often require more data for training than a single expert can provide. Therefore, it has become a standard procedure to collect data from multiple external sources, e.g. via crowdsourcing. Unfortunately, the quality of these sources is not always guaranteed. As further complications, the data might be stored in a distributed way, or might even have to remain private. In this work, we address the question of how to learn robustly in such scenarios. Studying the problem through the lens of statistical learning theory, we derive a procedure that allows for learning from all available sources, yet automatically suppresses irrelevant or corrupted data. We show by extensive experiments that our method provides significant improvements over alternative approaches from robust statistics and distributed optimization.
APA
Konstantinov, N. & Lampert, C. (2019). Robust Learning from Untrusted Sources. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3488-3498. Available from https://proceedings.mlr.press/v97/konstantinov19a.html.