Online Linear Optimization via Smoothing

Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari
Proceedings of The 27th Conference on Learning Theory, PMLR 35:807-823, 2014.

Abstract

We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization. We show that adding a strongly convex penalty function to the decision rule and adding stochastic perturbations to data correspond to deterministic and stochastic smoothing operations, respectively. We establish an equivalence between “Follow the Regularized Leader” and “Follow the Perturbed Leader” up to the smoothness properties. This intuition leads to a new generic analysis framework that recovers and improves the previously known regret bounds of the class of algorithms commonly known as Follow the Perturbed Leader.
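The perturbation-as-smoothing idea can be illustrated with a minimal sketch of Follow the Perturbed Leader in the experts setting: at each round, play the action minimizing cumulative loss plus random noise. This is an illustrative toy, not the paper's analysis; the perturbation scale `eta` and the choice of Gumbel noise are assumptions for the example.

```python
import numpy as np

def ftpl_simplex(losses, eta=1.0, rng=None):
    """Follow the Perturbed Leader over n experts (a toy sketch).

    losses: array of shape (T, n), loss of each expert at each round.
    eta: perturbation scale (illustrative parameter, not from the paper).
    Returns (algorithm's total loss, regret vs. the best fixed expert).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, n = losses.shape
    cum = np.zeros(n)          # cumulative loss of each expert so far
    total_loss = 0.0
    for t in range(T):
        # Stochastic perturbation: the smoothing operation in the abstract.
        noise = rng.gumbel(size=n)
        i = int(np.argmin(cum - eta * noise))  # perturbed leader
        total_loss += losses[t, i]
        cum += losses[t]
    best = losses.sum(axis=0).min()
    return total_loss, total_loss - best
```

Averaging over the noise smooths the otherwise discontinuous argmin, which is what makes a regret analysis parallel to Follow the Regularized Leader possible.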

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-abernethy14,
  title = {Online Linear Optimization via Smoothing},
  author = {Abernethy, Jacob and Lee, Chansoo and Sinha, Abhinav and Tewari, Ambuj},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages = {807--823},
  year = {2014},
  editor = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume = {35},
  series = {Proceedings of Machine Learning Research},
  address = {Barcelona, Spain},
  month = {13--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v35/abernethy14.pdf},
  url = {https://proceedings.mlr.press/v35/abernethy14.html},
  abstract = {We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization. We show that adding a strongly convex penalty function to the decision rule and adding stochastic perturbations to data correspond to deterministic and stochastic smoothing operations, respectively. We establish an equivalence between “Follow the Regularized Leader” and “Follow the Perturbed Leader” up to the smoothness properties. This intuition leads to a new generic analysis framework that recovers and improves the previously known regret bounds of the class of algorithms commonly known as Follow the Perturbed Leader.}
}
Endnote
%0 Conference Paper
%T Online Linear Optimization via Smoothing
%A Jacob Abernethy
%A Chansoo Lee
%A Abhinav Sinha
%A Ambuj Tewari
%B Proceedings of The 27th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2014
%E Maria Florina Balcan
%E Vitaly Feldman
%E Csaba Szepesvári
%F pmlr-v35-abernethy14
%I PMLR
%P 807--823
%U https://proceedings.mlr.press/v35/abernethy14.html
%V 35
%X We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization. We show that adding a strongly convex penalty function to the decision rule and adding stochastic perturbations to data correspond to deterministic and stochastic smoothing operations, respectively. We establish an equivalence between “Follow the Regularized Leader” and “Follow the Perturbed Leader” up to the smoothness properties. This intuition leads to a new generic analysis framework that recovers and improves the previously known regret bounds of the class of algorithms commonly known as Follow the Perturbed Leader.
RIS
TY  - CPAPER
TI  - Online Linear Optimization via Smoothing
AU  - Jacob Abernethy
AU  - Chansoo Lee
AU  - Abhinav Sinha
AU  - Ambuj Tewari
BT  - Proceedings of The 27th Conference on Learning Theory
DA  - 2014/05/29
ED  - Maria Florina Balcan
ED  - Vitaly Feldman
ED  - Csaba Szepesvári
ID  - pmlr-v35-abernethy14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 35
SP  - 807
EP  - 823
L1  - http://proceedings.mlr.press/v35/abernethy14.pdf
UR  - https://proceedings.mlr.press/v35/abernethy14.html
AB  - We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization. We show that adding a strongly convex penalty function to the decision rule and adding stochastic perturbations to data correspond to deterministic and stochastic smoothing operations, respectively. We establish an equivalence between “Follow the Regularized Leader” and “Follow the Perturbed Leader” up to the smoothness properties. This intuition leads to a new generic analysis framework that recovers and improves the previously known regret bounds of the class of algorithms commonly known as Follow the Perturbed Leader.
ER  -
APA
Abernethy, J., Lee, C., Sinha, A., & Tewari, A. (2014). Online Linear Optimization via Smoothing. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:807-823. Available from https://proceedings.mlr.press/v35/abernethy14.html.