On how complexity affects the stability of a predictor
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:161-167, 2018.
Abstract
Given a finite random sample from a Markov chain environment, we select a predictor that minimizes a criterion function and refer to it as being calibrated to its environment. If its prediction error is not bounded by its criterion value, we say that the criterion fails. We define the predictor’s complexity to be the amount of uncertainty in detecting that the criterion fails given that it fails. We define a predictor’s stability to be the discrepancy between the average numbers of prediction errors that it makes on two random samples. We show that complexity is inversely proportional to the level of adaptivity of the calibrated predictor to its random environment. The calibrated predictor becomes less stable as its complexity increases or as its level of adaptivity decreases.
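To make the stability notion concrete, the following is a minimal sketch, not taken from the paper: it assumes a simple two-state Markov chain and a frequency-based predictor calibrated by minimizing empirical prediction error (the chain, the criterion, and the predictor class are illustrative assumptions, not the paper's definitions). It calibrates the predictor on one sample and measures the discrepancy between its average prediction-error rates on that sample and on a second, independent sample.

```python
# Hypothetical illustration (not from the paper): the "stability" gap of a
# calibrated predictor, measured as the discrepancy between its average
# prediction-error rates on two independent samples from the same
# Markov chain environment.
import numpy as np

rng = np.random.default_rng(0)

# A two-state Markov chain environment (transition matrix chosen arbitrarily).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def sample_chain(P, n, rng):
    """Draw a length-n state sequence from the chain with transition matrix P."""
    states = np.empty(n, dtype=int)
    states[0] = rng.integers(P.shape[0])
    for t in range(1, n):
        states[t] = rng.choice(P.shape[0], p=P[states[t - 1]])
    return states

def calibrate(sample, num_states=2):
    """Calibrate a predictor by minimizing empirical prediction error:
    for each current state, predict the next state seen most often in the sample."""
    counts = np.zeros((num_states, num_states))
    for s, s_next in zip(sample[:-1], sample[1:]):
        counts[s, s_next] += 1
    return counts.argmax(axis=1)  # predictor: current state -> predicted next state

def avg_error(predictor, sample):
    """Average (per-step) prediction-error rate of the predictor on a sample."""
    preds = predictor[sample[:-1]]
    return np.mean(preds != sample[1:])

n = 500
sample1 = sample_chain(P, n, rng)
sample2 = sample_chain(P, n, rng)

predictor = calibrate(sample1)  # calibrated to the first sample
stability_gap = abs(avg_error(predictor, sample1) - avg_error(predictor, sample2))
print(f"discrepancy between average errors on the two samples: {stability_gap:.4f}")
```

In this toy setup the predictor class is very small, so the gap between the two error rates stays small; the paper's result concerns how this gap grows as the predictor's complexity increases or its adaptivity to the environment decreases.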