Learning without concentration

Shahar Mendelson
Proceedings of The 27th Conference on Learning Theory, PMLR 35:25-39, 2014.

Abstract

We obtain sharp bounds on the convergence rate of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a ‘small-ball’ assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the ‘noise level’ of the problem. When applied to the classical, bounded scenario, the method always improves the known estimates.
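For context, a minimal sketch of the kind of 'small-ball' assumption this line of work relies on: a lower bound on the probability that differences of class members are non-negligible at the scale of their L_2 norm. The notation below (class F, constants \kappa and \varepsilon, design X) is assumed for illustration and is not taken from this page:

\[
\Pr\bigl( |f(X) - h(X)| \ge \kappa \, \| f - h \|_{L_2} \bigr) \;\ge\; \varepsilon
\quad \text{for every } f, h \in F,
\]

for some constants \kappa > 0 and \varepsilon \in (0, 1]. Note that this requires no upper bound on class members or on the target, which is why heavy-tailed sampling can satisfy it where concentration-based arguments fail.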

Cite this Paper


BibTeX

@InProceedings{pmlr-v35-mendelson14,
  title     = {Learning without concentration},
  author    = {Mendelson, Shahar},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {25--39},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/mendelson14.pdf},
  url       = {https://proceedings.mlr.press/v35/mendelson14.html}
}
APA
Mendelson, S. (2014). Learning without concentration. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:25-39. Available from https://proceedings.mlr.press/v35/mendelson14.html.
