Open Problem: Monotonicity of Learning

Tom Viering, Alexander Mey, Marco Loog
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3198-3201, 2019.

Abstract

We ask to what extent a learning algorithm behaves monotonically in the following sense: does it perform better, in expectation, when one instance is added to the training set? We focus on empirical risk minimization and illustrate this property with several examples: two where it does hold and two where it does not. We also relate it to the notion of PAC-learnability.
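One possible formalization of the property the abstract asks about (a sketch consistent with the abstract's wording, not taken verbatim from the paper): for a learning algorithm A, a data-generating distribution P, expected risk R_P, and a training set S_n of n i.i.d. samples from P, call A monotone if

\[
\mathbb{E}_{S_{n+1}\sim P^{n+1}}\!\left[ R_P\!\left( A(S_{n+1}) \right) \right]
\;\le\;
\mathbb{E}_{S_{n}\sim P^{n}}\!\left[ R_P\!\left( A(S_{n}) \right) \right]
\qquad \text{for all } n \ge 1 .
\]

Whether this is required for every distribution P or only for a fixed P, and for every n or only eventually, is part of how one chooses to formalize the open problem.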

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-viering19a,
  title     = {Open Problem: Monotonicity of Learning},
  author    = {Viering, Tom and Mey, Alexander and Loog, Marco},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {3198--3201},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/viering19a/viering19a.pdf},
  url       = {https://proceedings.mlr.press/v99/viering19a.html}
}
Endnote
%0 Conference Paper
%T Open Problem: Monotonicity of Learning
%A Tom Viering
%A Alexander Mey
%A Marco Loog
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-viering19a
%I PMLR
%P 3198--3201
%U https://proceedings.mlr.press/v99/viering19a.html
%V 99
APA
Viering, T., Mey, A. & Loog, M. (2019). Open Problem: Monotonicity of Learning. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:3198-3201. Available from https://proceedings.mlr.press/v99/viering19a.html.