Online Aggregation of Unbounded Signed Losses Using Shifting Experts

Vladimir V. V’yugin
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:3-17, 2017.

Abstract

For the decision theoretic online learning (DTOL) setting, we consider methods for constructing algorithms that suffer loss not much greater than that of any sequence of experts distributed along a time interval (the shifting experts setting). We present a modified version of the Mixing Past Posteriors method which uses AdaHedge, with its adaptive learning rate, as the basic algorithm. This combines the advantages of both algorithms: the regret bounds remain valid for signed, unbounded expert losses, and we use the shifting regret, a more refined characteristic of the algorithm's performance. All results are obtained in the adversarial setting: no assumptions are made about the nature of the data source. We also present results of numerical experiments for the case where the losses of the experts cannot be bounded in advance.
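The key ingredient the abstract refers to, AdaHedge's data-dependent learning rate, can be illustrated with a short sketch. Note this is only plain AdaHedge (in the standard formulation of de Rooij et al.), not the paper's full combination with Mixing Past Posteriors; the function name and structure are illustrative, but it shows why no a priori bound on the (possibly signed) losses is needed: the learning rate adapts to the cumulative mixability gap observed so far.

```python
import numpy as np

def adahedge_weights(loss_matrix):
    """Sketch of the AdaHedge update. At round t the learning rate is
    eta_t = ln(K) / Delta_{t-1}, where Delta is the cumulative
    mixability gap, so losses may be signed and unbounded.
    Returns the weight vector played at each round."""
    T, K = loss_matrix.shape
    L = np.zeros(K)          # cumulative expert losses
    Delta = 0.0              # cumulative mixability gap
    history = []
    for t in range(T):
        ell = loss_matrix[t]
        if Delta <= 0.0:
            # eta = infinity: follow the leader (uniform over current leaders)
            w = (L == L.min()).astype(float)
            w /= w.sum()
            h = w @ ell                  # algorithm's (dot) loss
            m = ell[w > 0].min()         # mix loss in the eta -> inf limit
        else:
            eta = np.log(K) / Delta
            z = -eta * (L - L.min())     # shift exponent for numerical stability
            w = np.exp(z)
            w /= w.sum()
            h = w @ ell
            # mix loss: -(1/eta) * log sum_i w_i * exp(-eta * ell_i)
            m = ell.min() - np.log(w @ np.exp(-eta * (ell - ell.min()))) / eta
        history.append(w.copy())
        Delta += max(0.0, h - m)         # mixability gap is nonnegative
        L += ell
    return np.array(history)
```

On a stream with signed losses where one expert is consistently better (e.g. losses `[-1, 1]` every round), the weights concentrate on that expert without any loss range being specified in advance; the paper's contribution is to layer a Mixing Past Posteriors step on top of such weights so the algorithm can also track a *shifting* sequence of experts.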

Cite this Paper


BibTeX
@InProceedings{pmlr-v60-v’yugin17a,
  title     = {Online Aggregation of Unbounded Signed Losses Using Shifting Experts},
  author    = {V’yugin, Vladimir V.},
  booktitle = {Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications},
  pages     = {3--17},
  year      = {2017},
  editor    = {Gammerman, Alex and Vovk, Vladimir and Luo, Zhiyuan and Papadopoulos, Harris},
  volume    = {60},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--16 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v60/v’yugin17a/v’yugin17a.pdf},
  url       = {https://proceedings.mlr.press/v60/v-yugin17a.html},
  abstract  = {For the decision theoretic online (DTOL) setting, we consider methods to construct algorithms that suffer loss not much more than of any sequence of experts distributed along a time interval (shifting experts setting). We present a modified version of the method of Mixing Past Posteriors which uses as basic algorithm AdaHedge with adaptive learning rate. Due to this, we combine the advantages of both algorithms: regret bounds are valid in the case of signed unbounded losses of the experts, also, we use the shifting regret which is a more optimal characteristic of the algorithm. All results are obtained in the adversarial setting—no assumptions are made about the nature of data source. We present results of numerical experiments for the case where losses of the experts cannot be bounded in advance.}
}
EndNote
%0 Conference Paper
%T Online Aggregation of Unbounded Signed Losses Using Shifting Experts
%A Vladimir V. V’yugin
%B Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications
%C Proceedings of Machine Learning Research
%D 2017
%E Alex Gammerman
%E Vladimir Vovk
%E Zhiyuan Luo
%E Harris Papadopoulos
%F pmlr-v60-v’yugin17a
%I PMLR
%P 3--17
%U https://proceedings.mlr.press/v60/v-yugin17a.html
%V 60
%X For the decision theoretic online (DTOL) setting, we consider methods to construct algorithms that suffer loss not much more than of any sequence of experts distributed along a time interval (shifting experts setting). We present a modified version of the method of Mixing Past Posteriors which uses as basic algorithm AdaHedge with adaptive learning rate. Due to this, we combine the advantages of both algorithms: regret bounds are valid in the case of signed unbounded losses of the experts, also, we use the shifting regret which is a more optimal characteristic of the algorithm. All results are obtained in the adversarial setting—no assumptions are made about the nature of data source. We present results of numerical experiments for the case where losses of the experts cannot be bounded in advance.
APA
V’yugin, V.V. (2017). Online Aggregation of Unbounded Signed Losses Using Shifting Experts. Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research 60:3-17. Available from https://proceedings.mlr.press/v60/v-yugin17a.html.