Generalization bounds for mixing processes via delayed online-to-PAC conversions
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:23-40, 2025.
Abstract
We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method, even if the data sequence is sampled from a mixing time series. The resulting rates exhibit a trade-off governed by the delay parameter, and recover the best known bounds in previously studied settings when the delay is tuned appropriately as a function of the mixing time of the process.
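To make the reduction concrete, here is a minimal toy sketch (not the paper's construction) of online learning with delayed feedback: an exponentially weighted average forecaster over a finite hypothesis class, where the loss vector of round t is revealed only D rounds later. The quantities `K`, `T`, `D`, the step size `eta`, and the uniform random losses are all illustrative assumptions; in the online-to-PAC conversion it is the average regret of such a delayed online learner that upper-bounds the generalization error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper):
K, T, D = 5, 2000, 10                      # hypotheses, rounds, feedback delay
eta = np.sqrt(np.log(K) / (T * (D + 1)))   # step size tuned for the delay

losses = rng.uniform(size=(T, K))          # stand-in for the loss sequence
weights_log = np.zeros(K)                  # log-weights of exponential weights
total_loss = 0.0

for t in range(T):
    # Play the exponential-weights distribution over hypotheses.
    p = np.exp(weights_log - weights_log.max())
    p /= p.sum()
    total_loss += p @ losses[t]            # learner's expected loss at round t
    # Feedback for round t - D only arrives now (delayed update).
    if t >= D:
        weights_log -= eta * losses[t - D]

best_fixed = losses.sum(axis=0).min()      # best single hypothesis in hindsight
regret = total_loss - best_fixed
print(f"average regret: {regret / T:.4f}")
```

The delayed update is the key point: the learner's round-t decision cannot depend on the D most recent losses, which is what allows the reduction to tolerate dependence (mixing) in the data when D is set on the order of the mixing time.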