Adaptivity and Optimism: An Improved Exponentiated Gradient Algorithm

Jacob Steinhardt, Percy Liang
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1593-1601, 2014.

Abstract

We present an adaptive variant of the exponentiated gradient algorithm. Leveraging the optimistic learning framework of Rakhlin & Sridharan (2012), we obtain regret bounds that, in the learning-from-experts setting, depend on the variance and path length of the best expert, improving on results by Hazan & Kale (2008) and Chiang et al. (2012) and resolving an open problem posed by Kale (2012). Our techniques extend naturally to matrix-valued loss functions, for which we present an adaptive matrix exponentiated gradient algorithm. To obtain the optimal regret bound in the matrix case, we generalize the Follow-the-Regularized-Leader algorithm to vector-valued payoffs, which may be of independent interest.
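For readers unfamiliar with the optimistic update template the paper builds on, the following sketch (Python with NumPy) illustrates exponentiated gradient with an optimistic hint. The function name optimistic_eg, the learning rate eta, and the choice of the previous round's loss vector as the hint m_t are illustrative assumptions; the paper's actual algorithm adds further adaptive correction terms beyond this template.

import numpy as np

def optimistic_eg(losses, eta=0.1):
    """Hypothetical sketch of optimistic exponentiated gradient.

    losses: (T, n) array of per-round losses for n experts.
    Plays w_t proportional to exp(-eta * (L_{t-1} + m_t)), where
    L_{t-1} is the cumulative loss so far and m_t is a hint for
    the upcoming round's loss vector.
    """
    T, n = losses.shape
    cum = np.zeros(n)    # cumulative losses L_{t-1}
    hint = np.zeros(n)   # optimistic hint m_t (here: previous loss)
    alg_loss = 0.0
    for t in range(T):
        logits = -eta * (cum + hint)
        w = np.exp(logits - logits.max())  # numerically stable softmax
        w /= w.sum()
        alg_loss += float(w @ losses[t])
        cum += losses[t]
        hint = losses[t]  # m_{t+1} = loss_t: the path-length hint
    return alg_loss - cum.min()  # regret against the best fixed expert

Setting the hint to the previous round's loss is the standard route to path-length-style bounds (cf. Chiang et al., 2012); the paper sharpens such bounds to depend on the variance and path length of the best expert alone.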

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-steinhardtb14,
  title     = {Adaptivity and Optimism: An Improved Exponentiated Gradient Algorithm},
  author    = {Steinhardt, Jacob and Liang, Percy},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1593--1601},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/steinhardtb14.pdf},
  url       = {https://proceedings.mlr.press/v32/steinhardtb14.html}
}
APA
Steinhardt, J. & Liang, P. (2014). Adaptivity and Optimism: An Improved Exponentiated Gradient Algorithm. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1593-1601. Available from https://proceedings.mlr.press/v32/steinhardtb14.html.