Open Problem: Monotonicity of Learning
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3198-3201, 2019.
We pose the question of the extent to which a learning algorithm behaves monotonically, in the following sense: does it perform better, in expectation, when one instance is added to the training set? We focus on empirical risk minimization and illustrate this property with several examples: two where it holds and two where it does not. We also relate it to the notion of PAC-learnability.
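The monotonicity question above can be probed empirically. The following is a minimal Monte Carlo sketch, not one of the paper's constructions: it estimates the expected risk of a simple ERM learner (the sample mean, which minimizes empirical squared loss over constant predictors) at training sizes n and n+1 and compares them. The data distribution, the loss, and the learner are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_risk(n, trials=20000):
    """Monte Carlo estimate of the expected risk at training size n.

    The learner is ERM over constant predictors under squared loss,
    i.e. the sample mean. For Y ~ N(0, 1), the true risk of predicting
    the constant theta is E[(Y - theta)^2] = theta^2 + 1, so we can
    evaluate each trained predictor's risk in closed form.
    """
    risks = []
    for _ in range(trials):
        sample = rng.normal(loc=0.0, scale=1.0, size=n)
        theta = sample.mean()          # empirical risk minimizer
        risks.append(theta ** 2 + 1.0)  # exact risk of this predictor
    return float(np.mean(risks))

# Compare expected risk at n and n+1 for a few training sizes.
for n in [5, 10, 20]:
    r_n, r_next = expected_risk(n), expected_risk(n + 1)
    print(f"n={n:2d}: risk {r_n:.4f} -> {r_next:.4f}, monotone: {r_next <= r_n}")
```

For this learner the expected risk is 1 + 1/n, so the estimates decrease with n and the check reports monotone behavior; the paper's point is that for other ERM learners this need not happen.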