Online Aggregation of Unbounded Signed Losses Using Shifting Experts
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:3-17, 2017.
Abstract
For the decision-theoretic online learning (DTOL) setting,
we consider methods for constructing algorithms whose loss is not much larger than that of any sequence of experts
distributed along the time interval (the shifting experts setting).
We present a modified version of the Mixing Past Posteriors method
that uses AdaHedge, with its adaptive learning rate, as the base algorithm.
This combination inherits the advantages of both algorithms:
the regret bounds remain valid for signed, unbounded expert losses,
and performance is measured by the shifting regret, a more refined characteristic of the algorithm.
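For reference, shifting regret is commonly defined as follows; this is the standard notion from the shifting-experts literature, and the paper's exact normalization may differ:

R_T(k) \;=\; \sum_{t=1}^{T} h_t \;-\; \min\Big\{ \sum_{t=1}^{T} \ell_{i_t,t} \;:\; i_1,\dots,i_T \text{ with at most } k \text{ switches} \Big\},

where h_t is the aggregating algorithm's loss at step t, \ell_{i,t} is expert i's loss at step t, and a switch is a round with i_t \neq i_{t+1}. With k = 0 this reduces to the ordinary regret against the best fixed expert.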
All results hold in the adversarial setting: no assumptions are made about the nature of the data source.
We also present results of numerical experiments for the case where the experts' losses cannot be bounded in advance.
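To make the combination concrete, below is a minimal Python sketch of the idea: exponential-weights aggregation with an AdaHedge-style adaptive learning rate, followed by a Fixed-Share mixing step (the simplest member of the Mixing Past Posteriors family). This is an illustrative reconstruction, not the paper's exact algorithm; the mixing rate alpha and the eps smoothing of AdaHedge's follow-the-leader start are assumptions made for the sketch.

import numpy as np

def shifting_adahedge(losses, alpha=0.01, eps=1e-12):
    # losses: (T, N) array of signed, possibly unbounded expert losses.
    # alpha:  hypothetical mixing rate (fraction of weight redistributed
    #         uniformly each round); eps replaces AdaHedge's exact
    #         follow-the-leader start to keep the sketch short.
    T, N = losses.shape
    w = np.full(N, 1.0 / N)      # posterior weights over the experts
    Delta = eps                  # cumulative mixability gap (AdaHedge)
    out = np.empty(T)
    for t in range(T):
        eta = np.log(N) / Delta  # adaptive learning rate
        ell = losses[t]
        out[t] = w @ ell         # aggregator's (Hedge) loss this round
        # mix loss, computed stably by shifting losses by their minimum
        s = ell.min()
        v = np.exp(-eta * (ell - s))
        mix = s - np.log(w @ v) / eta
        Delta += out[t] - mix    # mixability gap, always >= 0
        # exponential-weights update with the current learning rate
        w = w * v / (w @ v)
        # Fixed-Share mixing step: keeps every expert recoverable,
        # which is what yields shifting-regret guarantees
        w = (1.0 - alpha) * w + alpha / N
    return out

On a stream whose best expert changes identity over time, the best fixed expert does poorly while an aggregator of this kind tracks each segment's leader; alpha trades off tracking speed against regret on stationary data.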