A Sharp Lower Bound for Agnostic Learning with Sample Compression Schemes

Steve Hanneke, Aryeh Kontorovich
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:489-505, 2019.

Abstract

We establish a tight characterization of the worst-case rates for the excess risk of agnostic learning with sample compression schemes and for uniform convergence for agnostic sample compression schemes. In particular, we find that the optimal rates of convergence for size-$k$ agnostic sample compression schemes are of the form $\sqrt{\frac{k \log(n/k)}{n}}$, which contrasts with agnostic learning with classes of VC dimension $k$, where the optimal rates are of the form $\sqrt{\frac{k}{n}}$.
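For concreteness, the relationship between the two rates quoted above can be made explicit (this is a rephrasing of the abstract's claim, not an additional result): since
$$\sqrt{\frac{k \log(n/k)}{n}} \;=\; \sqrt{\log(n/k)} \cdot \sqrt{\frac{k}{n}},$$
the optimal rate for size-$k$ agnostic sample compression schemes exceeds the $\sqrt{k/n}$ rate for classes of VC dimension $k$ by a multiplicative factor of $\sqrt{\log(n/k)}$, which grows without bound as $n/k \to \infty$.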

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-hanneke19b,
  title     = {A Sharp Lower Bound for Agnostic Learning with Sample Compression Schemes},
  author    = {Hanneke, Steve and Kontorovich, Aryeh},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {489--505},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/hanneke19b/hanneke19b.pdf},
  url       = {https://proceedings.mlr.press/v98/hanneke19b.html},
  abstract  = {We establish a tight characterization of the worst-case rates for the excess risk of agnostic learning with sample compression schemes and for uniform convergence for agnostic sample compression schemes. In particular, we find that the optimal rates of convergence for size-$k$ agnostic sample compression schemes are of the form $\sqrt{\frac{k \log(n/k)}{n}}$, which contrasts with agnostic learning with classes of VC dimension $k$, where the optimal rates are of the form $\sqrt{\frac{k}{n}}$.}
}
APA
Hanneke, S. & Kontorovich, A. (2019). A Sharp Lower Bound for Agnostic Learning with Sample Compression Schemes. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:489-505. Available from https://proceedings.mlr.press/v98/hanneke19b.html.
