Universal Online Learning with Unbounded Losses: Memory Is All You Need

Moïse Blanchard, Romain Cosson, Steve Hanneke
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:107-127, 2022.

Abstract

We resolve an open problem of Hanneke (2021) on the subject of universally consistent online learning with non-i.i.d. processes and unbounded losses. The notion of an optimistically universal learning rule was defined by Hanneke in an effort to study learning theory under minimal assumptions. A given learning rule is said to be optimistically universal if it achieves a low long-run average loss whenever the data-generating process makes this goal achievable by some learning rule. Hanneke (2021) posed as an open problem whether, for every unbounded loss, the processes admitting universal learning are precisely those having a finite number of distinct values almost surely. In this paper, we completely resolve this problem, showing that this is indeed the case. As a consequence, this also offers a dramatically simpler formulation of an optimistically universal learning rule for any unbounded loss: namely, the simple memorization rule already suffices. Our proof relies on constructing random measurable partitions of the instance space. This technique may be of independent interest in providing useful arguments towards solving the remaining open question of optimistically universal online learning for bounded losses.
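
To make the headline claim concrete, the following is a minimal Python sketch of a memorization rule (our illustration under stated assumptions, not the paper's pseudocode: the class and variable names are ours, and instances are assumed hashable). In the standard online protocol, the learner observes an instance, makes a prediction, and only then sees the true value.

    class MemorizationLearner:
        """Predict the most recently observed value for each repeated instance."""

        def __init__(self, default_label):
            self.table = {}                 # instance -> last observed value
            self.default_label = default_label

        def predict(self, x):
            # Repeat instance: replay its memorized value.
            # New instance: fall back to an arbitrary default prediction.
            return self.table.get(x, self.default_label)

        def update(self, x, y):
            self.table[x] = y               # memorize the revealed value

    # Online protocol: observe x_t, predict, then learn the true y_t.
    learner = MemorizationLearner(default_label=0)
    stream = [("a", 1), ("b", 2), ("a", 1), ("b", 2), ("a", 1)]
    mistakes = 0
    for x, y in stream:
        y_hat = learner.predict(x)
        mistakes += int(y_hat != y)
        learner.update(x, y)
    print(mistakes)  # 2: one error per distinct instance, none on repeats

The point of the sketch: if the process takes only finitely many distinct values almost surely (the paper's characterization of the learnable processes for unbounded losses), then in the noiseless setting with a fixed unknown target every instance is eventually a repeat, so memorization incurs at most one nonzero loss per distinct value and its long-run average loss vanishes.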

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-blanchard22a,
  title     = {Universal Online Learning with Unbounded Losses: Memory Is All You Need},
  author    = {Blanchard, Mo\"ise and Cosson, Romain and Hanneke, Steve},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {107--127},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/blanchard22a/blanchard22a.pdf},
  url       = {https://proceedings.mlr.press/v167/blanchard22a.html}
}
Endnote
%0 Conference Paper
%T Universal Online Learning with Unbounded Losses: Memory Is All You Need
%A Moïse Blanchard
%A Romain Cosson
%A Steve Hanneke
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-blanchard22a
%I PMLR
%P 107--127
%U https://proceedings.mlr.press/v167/blanchard22a.html
%V 167
APA
Blanchard, M., Cosson, R. & Hanneke, S. (2022). Universal Online Learning with Unbounded Losses: Memory Is All You Need. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:107-127. Available from https://proceedings.mlr.press/v167/blanchard22a.html.
