Sequential prediction with coded side information under logarithmic loss

Yanina Shkel, Maxim Raginsky, Sergio Verdú
Proceedings of Algorithmic Learning Theory, PMLR 83:753-769, 2018.

Abstract

We study the problem of sequential prediction with coded side information under logarithmic loss (log-loss). We show an operational equivalence between this setup and lossy compression with log-loss distortion. Using this insight, together with recent work on lossy compression with log-loss, we connect prediction strategies with distributions in a certain subset of the probability simplex. This allows us to derive a Shtarkov-like bound for regret and to evaluate the regret for several illustrative classes of experts. In the present work, we mainly focus on the “batch” side information setting with sequential prediction.
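For readers unfamiliar with the terminology, the standard definitions of log loss and the Shtarkov sum from the universal-prediction literature can be sketched as follows (these are the classical definitions; the paper's side-information variant refines them):

```latex
% Log loss of a predictive distribution p evaluated at outcome x:
\[
  \ell(p, x) = \log \frac{1}{p(x)}.
\]
% Classical Shtarkov bound: for an expert class \{p_\theta\}, the
% minimax log-loss regret over sequences x^n equals the log of the
% Shtarkov sum, attained by the normalized maximum-likelihood predictor:
\[
  \min_{q} \max_{x^n} \left( \log \frac{1}{q(x^n)}
    - \min_{\theta} \log \frac{1}{p_\theta(x^n)} \right)
  = \log \sum_{x^n} \max_{\theta} p_\theta(x^n).
\]
```

The "Shtarkov-like bound" mentioned in the abstract adapts this quantity to the setting where the predictor additionally receives coded side information.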

Cite this Paper


BibTeX
@InProceedings{pmlr-v83-shkel18a,
  title     = {Sequential prediction with coded side information under logarithmic loss},
  author    = {Shkel, Yanina and Raginsky, Maxim and Verdú, Sergio},
  booktitle = {Proceedings of Algorithmic Learning Theory},
  pages     = {753--769},
  year      = {2018},
  editor    = {Janoos, Firdaus and Mohri, Mehryar and Sridharan, Karthik},
  volume    = {83},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v83/shkel18a/shkel18a.pdf},
  url       = {https://proceedings.mlr.press/v83/shkel18a.html},
  abstract  = {We study the problem of sequential prediction with coded side information under logarithmic loss (log-loss). We show an operational equivalence between this setup and lossy compression with log-loss distortion. Using this insight, together with recent work on lossy compression with log-loss, we connect prediction strategies with distributions in a certain subset of the probability simplex. This allows us to derive a Shtarkov-like bound for regret and to evaluate the regret for several illustrative classes of experts. In the present work, we mainly focus on the “batch” side information setting with sequential prediction.}
}
Endnote
%0 Conference Paper
%T Sequential prediction with coded side information under logarithmic loss
%A Yanina Shkel
%A Maxim Raginsky
%A Sergio Verdú
%B Proceedings of Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Firdaus Janoos
%E Mehryar Mohri
%E Karthik Sridharan
%F pmlr-v83-shkel18a
%I PMLR
%P 753--769
%U https://proceedings.mlr.press/v83/shkel18a.html
%V 83
%X We study the problem of sequential prediction with coded side information under logarithmic loss (log-loss). We show an operational equivalence between this setup and lossy compression with log-loss distortion. Using this insight, together with recent work on lossy compression with log-loss, we connect prediction strategies with distributions in a certain subset of the probability simplex. This allows us to derive a Shtarkov-like bound for regret and to evaluate the regret for several illustrative classes of experts. In the present work, we mainly focus on the “batch” side information setting with sequential prediction.
APA
Shkel, Y., Raginsky, M., & Verdú, S. (2018). Sequential prediction with coded side information under logarithmic loss. Proceedings of Algorithmic Learning Theory, in Proceedings of Machine Learning Research 83:753-769. Available from https://proceedings.mlr.press/v83/shkel18a.html.