Faster online calibration without randomization: interval forecasts and the power of two choices

Chirag Gupta, Aaditya Ramdas
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4283-4309, 2022.

Abstract

We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature. Following the seminal paper of Foster and Vohra (1998), nature is often modeled as an adaptive adversary who sees all activity of the forecaster except the randomization that the forecaster may deploy. A number of papers have proposed randomized forecasting strategies that achieve an $\epsilon$-calibration error rate of $O(1/\sqrt{T})$, which we prove is tight in general. On the other hand, it is well known that it is not possible to be calibrated without randomization, or if nature also sees the forecaster’s randomization; in both cases the calibration error could be $\Omega(1)$. Inspired by the equally seminal works on the power of two choices and imprecise probability theory, we study a small variant of the standard online calibration problem. The adversary gives the forecaster the option of making two nearby probabilistic forecasts, or equivalently an interval forecast of small width, and the endpoint closest to the revealed outcome is used to judge calibration. This power of two choices, or imprecise forecast, accords the forecaster with significant power—we show that a faster $\epsilon$-calibration rate of $O(1/T)$ can be achieved even without deploying any randomization.
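To make the setting concrete, the following minimal sketch (Python, not from the paper; all function and variable names are illustrative assumptions) shows how an ℓ1-style calibration error can be computed from point forecasts, and how an interval forecast could be scored against the endpoint closest to the revealed binary outcome, as described in the abstract.

# Illustrative sketch only: a common l1 notion of calibration error and the
# "closest endpoint" scoring of an interval forecast. This is not the authors'
# algorithm; names (l1_calibration_error, score_interval) are hypothetical.
from collections import defaultdict

def l1_calibration_error(forecasts, outcomes):
    """Sum over distinct forecast values p of (n_p / T) * |avg outcome at p - p|."""
    counts, sums = defaultdict(int), defaultdict(float)
    for p, y in zip(forecasts, outcomes):
        counts[p] += 1
        sums[p] += y
    T = len(outcomes)
    return sum(counts[p] / T * abs(sums[p] / counts[p] - p) for p in counts)

def score_interval(interval, y):
    """Return the endpoint of the interval closest to the outcome y in {0, 1}."""
    lo, hi = interval
    return lo if abs(lo - y) <= abs(hi - y) else hi

# Toy usage: interval forecasts of width eps = 0.1; the chosen endpoints act as
# the point forecasts that calibration is judged on.
intervals = [(0.4, 0.5), (0.4, 0.5), (0.8, 0.9)]
outcomes = [0, 1, 1]
effective = [score_interval(iv, y) for iv, y in zip(intervals, outcomes)]
print(l1_calibration_error(effective, outcomes))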

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-gupta22b,
  title     = {Faster online calibration without randomization: interval forecasts and the power of two choices},
  author    = {Gupta, Chirag and Ramdas, Aaditya},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {4283--4309},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/gupta22b/gupta22b.pdf},
  url       = {https://proceedings.mlr.press/v178/gupta22b.html}
}
Endnote
%0 Conference Paper
%T Faster online calibration without randomization: interval forecasts and the power of two choices
%A Chirag Gupta
%A Aaditya Ramdas
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-gupta22b
%I PMLR
%P 4283--4309
%U https://proceedings.mlr.press/v178/gupta22b.html
%V 178
APA
Gupta, C. & Ramdas, A. (2022). Faster online calibration without randomization: interval forecasts and the power of two choices. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:4283-4309. Available from https://proceedings.mlr.press/v178/gupta22b.html.