Robust Learning from Discriminative Feature Feedback

Sanjoy Dasgupta, Sivan Sabato
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:973-982, 2020.

Abstract

Recent work introduced the model of "learning from discriminative feature feedback", in which a human annotator not only provides labels of instances, but also identifies discriminative features that highlight important differences between pairs of instances. It was shown that such feedback can be conducive to learning, and makes it possible to efficiently learn some concept classes that would otherwise be intractable. However, these results all relied upon *perfect* annotator feedback. In this paper, we introduce a more realistic, *robust* version of the framework, in which the annotator is allowed to make mistakes. We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting. In particular, we derive regret bounds in both settings that, as in the case of a perfect annotator, are independent of the number of features. We show that this result cannot be obtained by a naive reduction from the robust setting to the non-robust setting.
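To make the feedback model concrete, here is a minimal Python sketch of the interaction loop, under the assumption of a *perfect* annotator. The rule-based learner, the annotate helper, and the toy "stripes" concept are illustrative inventions for this sketch, not the paper's algorithm or notation.

# Toy sketch (illustrative only): on each mistake the learner receives
# the true label plus one feature that discriminates the instance from
# a previously seen instance of the (wrongly) predicted class.

def active(x):
    """The set of binary features that are on in instance x."""
    return {f for f, v in x.items() if v}

def predict(rules, x, default="-"):
    """Predict with the first learned rule whose feature is active."""
    for feat, label in rules:
        if feat in active(x):
            return label
    return default

def annotate(x, x_ref, true_label):
    """Perfect annotator: the true label plus a feature present in x
    but absent from the contrasting instance x_ref (None if no such
    feature exists)."""
    diff = active(x) - (active(x_ref) if x_ref else set())
    return true_label, (min(diff) if diff else None)

# Stream of (instance, true label); target concept: "+" iff "stripes" is on.
stream = [
    ({"stripes": 1, "tail": 1}, "+"),
    ({"tail": 1}, "-"),
    ({"stripes": 1}, "+"),
]

rules, seen, mistakes = [], {}, 0   # learned rules; last instance per true label
for x, y in stream:
    y_hat = predict(rules, x)
    if y_hat != y:
        mistakes += 1
        y_true, feat = annotate(x, seen.get(y_hat), y)  # feedback on a mistake
        if feat is not None:
            rules.append((feat, y_true))  # new rule keyed on the feature
    seen[y] = x
print("mistakes:", mistakes, "rules:", rules)

A robust variant in the paper's sense would additionally have to cope with annotate occasionally returning a wrong label or a non-discriminative feature, which is the setting the regret bounds address.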

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-dasgupta20a,
  title     = {Robust Learning from Discriminative Feature Feedback},
  author    = {Dasgupta, Sanjoy and Sabato, Sivan},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {973--982},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/dasgupta20a/dasgupta20a.pdf},
  url       = {https://proceedings.mlr.press/v108/dasgupta20a.html}
}
Endnote
%0 Conference Paper
%T Robust Learning from Discriminative Feature Feedback
%A Sanjoy Dasgupta
%A Sivan Sabato
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-dasgupta20a
%I PMLR
%P 973--982
%U https://proceedings.mlr.press/v108/dasgupta20a.html
%V 108
APA
Dasgupta, S. & Sabato, S. (2020). Robust Learning from Discriminative Feature Feedback. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:973-982. Available from https://proceedings.mlr.press/v108/dasgupta20a.html.