Monotone Learning

Olivier J Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:842-866, 2022.

Abstract

The amount of training data is one of the key factors that determine the generalization capacity of learning algorithms. Intuitively, one expects the error rate to decrease as the amount of training data increases. Perhaps surprisingly, natural attempts to formalize this intuition give rise to interesting and challenging mathematical questions. For example, in their classical book on pattern recognition, Devroye, Gyorfi and Lugosi (1996) ask whether there exists a \emph{monotone} Bayes-consistent algorithm. This question remained open for over 25 years, until recently Pestov (2021) resolved it for binary classification, using an intricate construction of a monotone Bayes-consistent algorithm. We derive a general result in multiclass classification, showing that every learning algorithm $A$ can be transformed to a monotone one with similar performance. Further, the transformation is efficient and only uses black-box oracle access to $A$. This demonstrates that one can provably avoid non-monotonic behaviour without compromising performance, thus answering questions asked by Devroye, Gyorfi, and Lugosi (1996), Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021). Our general transformation readily implies monotone learners in a variety of contexts: for example, Pestov’s result follows by applying it to \emph{any} Bayes-consistent algorithm (e.g., $k$-Nearest-Neighbours). In fact, our transformation extends Pestov’s result to classification tasks with an arbitrary number of labels. This is in contrast with Pestov’s work, which is tailored to binary classification. In addition, we provide uniform bounds on the error of the monotone algorithm, which makes our transformation applicable in distribution-free settings. For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions asked by Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021).
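
In this literature, a learner is typically called \emph{monotone} when its expected error does not increase as the training sample grows. The abstract does not spell out the authors' transformation, so the sketch below is only a hypothetical illustration of the black-box idea it alludes to, not the construction from the paper: the base learner $A$ is queried as an oracle on growing portions of the sample, and a newly produced hypothesis is adopted only if it does at least as well on a held-out split. This heuristic merely keeps the held-out error from increasing, a stand-in for the paper's guarantee on expected error, and every name here (monotone_wrapper, base_learner, the toy types) is invented for the example.

```python
# Illustrative sketch only; not the authors' construction from the paper.
import random
from typing import Callable, List, Tuple

Example = Tuple[float, int]          # (feature, label); multiclass labels allowed
Hypothesis = Callable[[float], int]  # a predictor returned by the base learner


def validation_error(h: Hypothesis, data: List[Example]) -> float:
    """Fraction of held-out examples that h misclassifies."""
    if not data:
        return 1.0
    return sum(h(x) != y for x, y in data) / len(data)


def monotone_wrapper(base_learner: Callable[[List[Example]], Hypothesis],
                     sample: List[Example],
                     holdout_fraction: float = 0.2,
                     seed: int = 0) -> Hypothesis:
    """Query base_learner (as a black box) on growing prefixes of the data,
    but only replace the current hypothesis when the candidate does at least
    as well on a held-out split, so the held-out error never goes up."""
    rng = random.Random(seed)
    data = sample[:]
    rng.shuffle(data)
    split = max(1, int(holdout_fraction * len(data)))
    holdout, train = data[:split], data[split:]

    best_h: Hypothesis = lambda x: 0          # trivial initial predictor
    best_err = validation_error(best_h, holdout)

    # Grow the training prefix; keep a candidate only if it (weakly) improves.
    for n in range(1, len(train) + 1):
        candidate = base_learner(train[:n])   # black-box oracle call to A
        err = validation_error(candidate, holdout)
        if err <= best_err:
            best_h, best_err = candidate, err
    return best_h
```

Plugging in any off-the-shelf base learner (e.g., a $k$-nearest-neighbour rule) illustrates the point that such a wrapper needs nothing beyond oracle access to $A$.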

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-bousquet22a,
  title = {Monotone Learning},
  author = {Bousquet, Olivier J and Daniely, Amit and Kaplan, Haim and Mansour, Yishay and Moran, Shay and Stemmer, Uri},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages = {842--866},
  year = {2022},
  editor = {Loh, Po-Ling and Raginsky, Maxim},
  volume = {178},
  series = {Proceedings of Machine Learning Research},
  month = {02--05 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v178/bousquet22a/bousquet22a.pdf},
  url = {https://proceedings.mlr.press/v178/bousquet22a.html},
  abstract = {The amount of training data is one of the key factors that determine the generalization capacity of learning algorithms. Intuitively, one expects the error rate to decrease as the amount of training data increases. Perhaps surprisingly, natural attempts to formalize this intuition give rise to interesting and challenging mathematical questions. For example, in their classical book on pattern recognition, Devroye, Gyorfi and Lugosi (1996) ask whether there exists a \emph{monotone} Bayes-consistent algorithm. This question remained open for over 25 years, until recently Pestov (2021) resolved it for binary classification, using an intricate construction of a monotone Bayes-consistent algorithm. We derive a general result in multiclass classification, showing that every learning algorithm $A$ can be transformed to a monotone one with similar performance. Further, the transformation is efficient and only uses black-box oracle access to $A$. This demonstrates that one can provably avoid non-monotonic behaviour without compromising performance, thus answering questions asked by Devroye, Gyorfi, and Lugosi (1996), Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021). Our general transformation readily implies monotone learners in a variety of contexts: for example, Pestov’s result follows by applying it to \emph{any} Bayes-consistent algorithm (e.g., $k$-Nearest-Neighbours). In fact, our transformation extends Pestov’s result to classification tasks with an arbitrary number of labels. This is in contrast with Pestov’s work, which is tailored to binary classification. In addition, we provide uniform bounds on the error of the monotone algorithm, which makes our transformation applicable in distribution-free settings. For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions asked by Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021).}
}
Endnote
%0 Conference Paper
%T Monotone Learning
%A Olivier J Bousquet
%A Amit Daniely
%A Haim Kaplan
%A Yishay Mansour
%A Shay Moran
%A Uri Stemmer
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-bousquet22a
%I PMLR
%P 842--866
%U https://proceedings.mlr.press/v178/bousquet22a.html
%V 178
%X The amount of training data is one of the key factors that determine the generalization capacity of learning algorithms. Intuitively, one expects the error rate to decrease as the amount of training data increases. Perhaps surprisingly, natural attempts to formalize this intuition give rise to interesting and challenging mathematical questions. For example, in their classical book on pattern recognition, Devroye, Gyorfi and Lugosi (1996) ask whether there exists a \emph{monotone} Bayes-consistent algorithm. This question remained open for over 25 years, until recently Pestov (2021) resolved it for binary classification, using an intricate construction of a monotone Bayes-consistent algorithm. We derive a general result in multiclass classification, showing that every learning algorithm $A$ can be transformed to a monotone one with similar performance. Further, the transformation is efficient and only uses black-box oracle access to $A$. This demonstrates that one can provably avoid non-monotonic behaviour without compromising performance, thus answering questions asked by Devroye, Gyorfi, and Lugosi (1996), Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021). Our general transformation readily implies monotone learners in a variety of contexts: for example, Pestov’s result follows by applying it to \emph{any} Bayes-consistent algorithm (e.g., $k$-Nearest-Neighbours). In fact, our transformation extends Pestov’s result to classification tasks with an arbitrary number of labels. This is in contrast with Pestov’s work, which is tailored to binary classification. In addition, we provide uniform bounds on the error of the monotone algorithm, which makes our transformation applicable in distribution-free settings. For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions asked by Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021).
APA
Bousquet, O.J., Daniely, A., Kaplan, H., Mansour, Y., Moran, S. & Stemmer, U. (2022). Monotone Learning. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:842-866. Available from https://proceedings.mlr.press/v178/bousquet22a.html.
