Sanjoy Dasgupta, Daniel Hsu, Stefanos Poulis, Xiaojin Zhu;
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1547-1555, 2019.
Abstract
One widely-studied model of teaching calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner’s hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown—that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to shrink training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.
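To make the training-set-shrinking idea concrete, below is a minimal Python sketch of an interactive greedy loop in the spirit of the abstract: the teacher repeatedly trains the black-box learner on a candidate subset, queries its predictions, and adds an example on which it still disagrees with the full-data classifier. The names (shrink_training_set, fit_predict) and the greedy selection rule are illustrative assumptions, not the authors' algorithm or its approximation guarantee.

# A minimal sketch (not the authors' algorithm): greedy interactive
# training-set shrinking. We assume only black-box access to the learner
# through fit_predict(examples, labels, queries) -> predicted labels for
# queries. All names here are hypothetical.

def shrink_training_set(pool, labels, fit_predict):
    """Greedily grow a subset of the pool until the black-box learner,
    trained on that subset, reproduces its full-data predictions on the
    entire pool."""
    # Reference behavior: the classifier obtained from the full training set.
    target = fit_predict(pool, labels, pool)

    subset = []  # indices of chosen teaching examples
    while True:
        xs = [pool[i] for i in subset]
        ys = [labels[i] for i in subset]
        preds = fit_predict(xs, ys, pool)
        # Interaction step: find a point where the subset-trained learner
        # still disagrees with the full-data classifier (skipping points
        # already chosen, so the loop always terminates).
        wrong = [i for i in range(len(pool))
                 if preds[i] != target[i] and i not in subset]
        if not wrong:
            return subset  # subset now yields the same predictions on the pool
        subset.append(wrong[0])


if __name__ == "__main__":
    # Toy black box: 1-nearest-neighbor on the real line. An empty training
    # set falls back to a default label, so the first round always adds a point.
    def fit_predict(xs, ys, queries):
        if not xs:
            return [0 for _ in queries]
        return [ys[min(range(len(xs)), key=lambda j: abs(xs[j] - q))]
                for q in queries]

    pool = [0.0, 0.2, 0.4, 1.6, 1.8, 2.0]
    labels = [0, 0, 0, 1, 1, 1]
    print(shrink_training_set(pool, labels, fit_predict))  # e.g. [3, 0]

In the worst case this loop retrains the learner once per selected example. The paper's contribution is showing that interaction of this kind suffices for a provably good approximation to the optimal teaching set, whereas, as the abstract notes, a teacher who cannot interact with the learner can do no better than providing random examples.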
@InProceedings{pmlr-v97-dasgupta19a,
title = {Teaching a black-box learner},
author = {Dasgupta, Sanjoy and Hsu, Daniel and Poulis, Stefanos and Zhu, Xiaojin},
booktitle = {Proceedings of the 36th International Conference on Machine Learning},
pages = {1547--1555},
year = {2019},
editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
volume = {97},
series = {Proceedings of Machine Learning Research},
address = {Long Beach, California, USA},
month = {09--15 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v97/dasgupta19a/dasgupta19a.pdf},
url = {http://proceedings.mlr.press/v97/dasgupta19a.html},
abstract = {One widely-studied model of \emph{teaching} calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner’s hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown—that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to \emph{shrink} training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.}
}
%0 Conference Paper
%T Teaching a black-box learner
%A Sanjoy Dasgupta
%A Daniel Hsu
%A Stefanos Poulis
%A Xiaojin Zhu
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-dasgupta19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1547--1555
%U http://proceedings.mlr.press/v97/dasgupta19a.html
%V 97
%W PMLR
%X One widely-studied model of teaching calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner’s hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown—that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to shrink training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.
Dasgupta, S., Hsu, D., Poulis, S. & Zhu, X. (2019). Teaching a black-box learner. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:1547-1555.