Generalization bounds for mixing processes via delayed online-to-PAC conversions

Baptiste Abélès, Eugenio Clerico, Gergely Neu
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:23-40, 2025.

Abstract

We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method even if the data sequence is sampled from a mixing time series. The rates demonstrate a trade-off between delay and mixing, and recover those known in well-studied settings when the delay is tuned appropriately as a function of the mixing time of the process.
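The reduction described above can be sketched schematically. The notation below is illustrative only (the symbols and the exact form of the bound are assumptions, not taken from the paper): if an online learner guarantees regret R in the delayed-feedback game, the generalization error of the statistical learner is controlled by the averaged regret plus a term reflecting the dependence left after the delay.

```latex
% Illustrative sketch (not the paper's theorem): delay \Delta, sample
% size n, regret R_n^{(\Delta)} of the online learner in the delayed
% game, and mixing coefficient \beta(\Delta) of the process.
\[
  \mathbb{E}\bigl[\operatorname{gen}(A_n)\bigr]
    \;\lesssim\; \frac{R_n^{(\Delta)}}{n} \;+\; \beta(\Delta)
\]
% Increasing \Delta typically worsens the regret term but shrinks
% \beta(\Delta); tuning \Delta as a function of the mixing time
% balances the two, which is the trade-off the abstract refers to.
```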

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-abeles25a,
  title     = {Generalization bounds for mixing processes via delayed {online-to-PAC} conversions},
  author    = {Ab\'el\`es, Baptiste and Clerico, Eugenio and Neu, Gergely},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {23--40},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/abeles25a/abeles25a.pdf},
  url       = {https://proceedings.mlr.press/v272/abeles25a.html},
  abstract  = {We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method even if the data sequence is sampled from a mixing time series. The rates demonstrate a trade-off between delay and mixing, and recover those known in well-studied settings when the delay is tuned appropriately as a function of the mixing time of the process.}
}
Endnote
%0 Conference Paper
%T Generalization bounds for mixing processes via delayed online-to-PAC conversions
%A Baptiste Abélès
%A Eugenio Clerico
%A Gergely Neu
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-abeles25a
%I PMLR
%P 23--40
%U https://proceedings.mlr.press/v272/abeles25a.html
%V 272
%X We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method even if the data sequence is sampled from a mixing time series. The rates demonstrate a trade-off between delay and mixing, and recover those known in well-studied settings when the delay is tuned appropriately as a function of the mixing time of the process.
APA
Abélès, B., Clerico, E. & Neu, G. (2025). Generalization bounds for mixing processes via delayed online-to-PAC conversions. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:23-40. Available from https://proceedings.mlr.press/v272/abeles25a.html.

Related Material