Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice

Yishay Mansour, Richard Nock, Robert Williamson
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23706-23742, 2023.

Abstract

A landmark negative result of Long and Servedio has had a considerable impact on research and development in boosting algorithms, around the now famous tagline that "noise defeats all convex boosters". In this paper, we appeal to the half-century+ founding theory of losses for class probability estimation, an extension of Long and Servedio’s results and a new general convex booster to demonstrate that the source of their negative result is in fact the model class, linear separators. Neither losses nor algorithms are to blame. This leads us to a discussion on an otherwise praised aspect of ML, parameterisation.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mansour23a,
  title     = {Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice},
  author    = {Mansour, Yishay and Nock, Richard and Williamson, Robert},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {23706--23742},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mansour23a/mansour23a.pdf},
  url       = {https://proceedings.mlr.press/v202/mansour23a.html},
  abstract  = {A landmark negative result of Long and Servedio has had a considerable impact on research and development in boosting algorithms, around the now famous tagline that "noise defeats all convex boosters". In this paper, we appeal to the half-century+ founding theory of losses for class probability estimation, an extension of Long and Servedio’s results and a new general convex booster to demonstrate that the source of their negative result is in fact the model class, linear separators. Losses or algorithms are neither to blame. This leads us to a discussion on an otherwise praised aspect of ML, parameterisation.}
}
Endnote
%0 Conference Paper
%T Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice
%A Yishay Mansour
%A Richard Nock
%A Robert Williamson
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mansour23a
%I PMLR
%P 23706--23742
%U https://proceedings.mlr.press/v202/mansour23a.html
%V 202
%X A landmark negative result of Long and Servedio has had a considerable impact on research and development in boosting algorithms, around the now famous tagline that "noise defeats all convex boosters". In this paper, we appeal to the half-century+ founding theory of losses for class probability estimation, an extension of Long and Servedio’s results and a new general convex booster to demonstrate that the source of their negative result is in fact the model class, linear separators. Losses or algorithms are neither to blame. This leads us to a discussion on an otherwise praised aspect of ML, parameterisation.
APA
Mansour, Y., Nock, R. & Williamson, R. (2023). Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23706-23742. Available from https://proceedings.mlr.press/v202/mansour23a.html.