Sample Compression for Real-Valued Learners

Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:466-488, 2019.

Abstract

We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-hanneke19a,
  title     = {Sample Compression for Real-Valued Learners},
  author    = {Hanneke, Steve and Kontorovich, Aryeh and Sadigurschi, Menachem},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {466--488},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/hanneke19a/hanneke19a.pdf},
  url       = {https://proceedings.mlr.press/v98/hanneke19a.html},
  abstract  = {We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.}
}
Endnote
%0 Conference Paper
%T Sample Compression for Real-Valued Learners
%A Steve Hanneke
%A Aryeh Kontorovich
%A Menachem Sadigurschi
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-hanneke19a
%I PMLR
%P 466--488
%U https://proceedings.mlr.press/v98/hanneke19a.html
%V 98
%X We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.
APA
Hanneke, S., Kontorovich, A. & Sadigurschi, M. (2019). Sample Compression for Real-Valued Learners. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:466-488. Available from https://proceedings.mlr.press/v98/hanneke19a.html.