On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy

Zeyu Jia, Alexander Rakhlin, Yury Polyanskiy
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:2958-3016, 2025.

Abstract

We study the problem of sequential probability assignment under logarithmic loss, both with and without side information. Our objective is to analyze the \emph{minimax regret}—a notion extensively studied in the literature—in terms of geometric quantities, such as covering numbers and scale-sensitive dimensions. We show that the minimax regret for the case of no side information (equivalently, the Shtarkov sum) can be upper bounded in terms of \emph{sequential square-root entropy}, a notion closely related to Hellinger distance. For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy. The lower bound matches the upper bound, up to log factors, for classes in the Donsker regime (according to our definition of entropy).
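For context, here is a minimal LaTeX sketch of the standard objects the abstract refers to. The notation (\(\mathrm{Reg}_n\), \(\mathcal{F}\), \(\hat{p}_t\)) is ours, not the paper's, and these are the classical definitions; the paper's precise setup, in particular its definition of sequential square-root entropy, should be taken from the paper itself.

% Sequential probability assignment under logarithmic loss
% (standard formulation; notation ours, assuming a finite alphabet \mathcal{X}).
% A learner outputs distributions \hat{p}_t; regret is measured against
% a reference class \mathcal{F} of distributions over sequences:
\[
  \mathrm{Reg}_n(\mathcal{F})
    = \sup_{x_1,\dots,x_n}
      \left[ \sum_{t=1}^{n} \log \frac{1}{\hat{p}_t(x_t)}
           - \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} \log \frac{1}{f(x_t)} \right].
\]
% Classical fact (Shtarkov): without side information, the minimax regret
% equals the logarithm of the Shtarkov sum,
\[
  \inf_{\hat{p}} \, \mathrm{Reg}_n(\mathcal{F})
    = \log \sum_{x^n \in \mathcal{X}^n} \, \sup_{f \in \mathcal{F}} f(x^n).
\]
% The "square-root" in the entropy refers to square roots of probabilities,
% as in the squared Hellinger distance (conventions differ by a factor of 1/2):
\[
  H^2(p, q) = \sum_{x \in \mathcal{X}} \bigl( \sqrt{p(x)} - \sqrt{q(x)} \bigr)^2 .
\]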

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-jia25a,
  title     = {On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy},
  author    = {Jia, Zeyu and Rakhlin, Alexander and Polyanskiy, Yury},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {2958--3016},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/jia25a/jia25a.pdf},
  url       = {https://proceedings.mlr.press/v291/jia25a.html},
  abstract  = {We study the problem of sequential probability assignment under logarithmic loss, both with and without side information. Our objective is to analyze the \emph{minimax regret}—a notion extensively studied in the literature—in terms of geometric quantities, such as covering numbers and scale-sensitive dimensions. We show that the minimax regret for the case of no side information (equivalently, the Shtarkov sum) can be upper bounded in terms of \emph{sequential square-root entropy}, a notion closely related to Hellinger distance. For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy. The lower bound matches the upper bound, up to log factors, for classes in the Donsker regime (according to our definition of entropy).}
}
Endnote
%0 Conference Paper
%T On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy
%A Zeyu Jia
%A Alexander Rakhlin
%A Yury Polyanskiy
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-jia25a
%I PMLR
%P 2958--3016
%U https://proceedings.mlr.press/v291/jia25a.html
%V 291
%X We study the problem of sequential probability assignment under logarithmic loss, both with and without side information. Our objective is to analyze the \emph{minimax regret}—a notion extensively studied in the literature—in terms of geometric quantities, such as covering numbers and scale-sensitive dimensions. We show that the minimax regret for the case of no side information (equivalently, the Shtarkov sum) can be upper bounded in terms of \emph{sequential square-root entropy}, a notion closely related to Hellinger distance. For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy. The lower bound matches the upper bound, up to log factors, for classes in the Donsker regime (according to our definition of entropy).
APA
Jia, Z., Rakhlin, A. & Polyanskiy, Y. (2025). On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:2958-3016. Available from https://proceedings.mlr.press/v291/jia25a.html.