The Hedge Algorithm on a Continuum

Walid Krichene, Maximilian Balandat, Claire Tomlin, Alexandre Bayen
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:824-832, 2015.

Abstract

We consider an online optimization problem on a subset S of R^n (not necessarily convex), in which a decision maker chooses, at each iteration t, a probability distribution x^(t) over S, and seeks to minimize a cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration t. Building on previous work, we propose a generalized Hedge algorithm and show a O(\sqrt{t} \log t) bound on the regret when the losses are uniformly Lipschitz and S is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization to the dual averaging method on the set of Lebesgue-continuous distributions over S.
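To make the setup concrete, here is a minimal sketch of a Hedge-style (exponential-weights) update run over a fixed discretization of S, not the paper's exact continuum construction: the grid, learning rate, and the Lipschitz loss family f_t(x) = |x - c_t| below are illustrative assumptions.

```python
import numpy as np

def hedge_on_grid(losses, eta):
    """Exponential-weights (Hedge) over a fixed grid of points in S.

    losses: array of shape (T, K), loss of each of K grid points at each of T rounds,
            revealed only after the round's distribution is played.
    eta:    learning rate.
    Returns (cumulative expected loss of the algorithm, loss of the best fixed point).
    """
    T, K = losses.shape
    cum = np.zeros(K)              # cumulative loss of each grid point
    alg_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * cum)     # exponential weights on the grid
        p = w / w.sum()            # distribution x^(t) over the grid
        alg_loss += p @ losses[t]  # expected loss incurred this round
        cum += losses[t]           # loss function revealed at end of round
    return alg_loss, cum.min()

# Illustrative example: S = [0, 1] discretized, Lipschitz losses f_t(x) = |x - c_t|.
grid = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(0)
centers = rng.uniform(size=200)
L = np.abs(grid[None, :] - centers[:, None])   # shape (T, K), values in [0, 1]
alg, best = hedge_on_grid(L, eta=0.5)
print("regret vs. best fixed point:", alg - best)
```

For losses in [0, 1], this finite Hedge satisfies the standard regret bound log(K)/eta + eta*T/8; the paper's contribution is handling the continuum directly, with uniform fatness of S replacing convexity.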

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-krichene15,
  title     = {The Hedge Algorithm on a Continuum},
  author    = {Krichene, Walid and Balandat, Maximilian and Tomlin, Claire and Bayen, Alexandre},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {824--832},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/krichene15.pdf},
  url       = {http://proceedings.mlr.press/v37/krichene15.html},
  abstract  = {We consider an online optimization problem on a subset S of R^n (not necessarily convex), in which a decision maker chooses, at each iteration t, a probability distribution x^(t) over S, and seeks to minimize a cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration t. Building on previous work, we propose a generalized Hedge algorithm and show a O(\sqrt{t} \log t) bound on the regret when the losses are uniformly Lipschitz and S is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization to the dual averaging method on the set of Lebesgue-continuous distributions over S.}
}
Endnote
%0 Conference Paper
%T The Hedge Algorithm on a Continuum
%A Walid Krichene
%A Maximilian Balandat
%A Claire Tomlin
%A Alexandre Bayen
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-krichene15
%I PMLR
%P 824--832
%U http://proceedings.mlr.press/v37/krichene15.html
%V 37
%X We consider an online optimization problem on a subset S of R^n (not necessarily convex), in which a decision maker chooses, at each iteration t, a probability distribution x^(t) over S, and seeks to minimize a cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration t. Building on previous work, we propose a generalized Hedge algorithm and show a O(\sqrt{t} \log t) bound on the regret when the losses are uniformly Lipschitz and S is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization to the dual averaging method on the set of Lebesgue-continuous distributions over S.
RIS
TY - CPAPER
TI - The Hedge Algorithm on a Continuum
AU - Walid Krichene
AU - Maximilian Balandat
AU - Claire Tomlin
AU - Alexandre Bayen
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-krichene15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 824
EP - 832
L1 - http://proceedings.mlr.press/v37/krichene15.pdf
UR - http://proceedings.mlr.press/v37/krichene15.html
AB - We consider an online optimization problem on a subset S of R^n (not necessarily convex), in which a decision maker chooses, at each iteration t, a probability distribution x^(t) over S, and seeks to minimize a cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration t. Building on previous work, we propose a generalized Hedge algorithm and show a O(\sqrt{t} \log t) bound on the regret when the losses are uniformly Lipschitz and S is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization to the dual averaging method on the set of Lebesgue-continuous distributions over S.
ER -
APA
Krichene, W., Balandat, M., Tomlin, C. & Bayen, A. (2015). The Hedge Algorithm on a Continuum. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:824-832. Available from http://proceedings.mlr.press/v37/krichene15.html.

Related Material