Sample Compression for Real-Valued Learners

Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:466-488, 2019.

Abstract

We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-hanneke19a,
  title     = {Sample Compression for Real-Valued Learners},
  author    = {Hanneke, Steve and Kontorovich, Aryeh and Sadigurschi, Menachem},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {466--488},
  year      = {2019},
  editor    = {Aurélien Garivier and Satyen Kale},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chicago, Illinois},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/hanneke19a/hanneke19a.pdf},
  url       = {http://proceedings.mlr.press/v98/hanneke19a.html},
  abstract  = {We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.}
}
EndNote
%0 Conference Paper
%T Sample Compression for Real-Valued Learners
%A Steve Hanneke
%A Aryeh Kontorovich
%A Menachem Sadigurschi
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-hanneke19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 466--488
%U http://proceedings.mlr.press
%V 98
%W PMLR
%X We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.
APA
Hanneke, S., Kontorovich, A. & Sadigurschi, M. (2019). Sample Compression for Real-Valued Learners. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in PMLR 98:466-488.