Online mirror descent and dual averaging: keeping pace in the dynamic case

Huang Fang, Nick Harvey, Victor Portella, Michael Friedlander
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3008-3017, 2020.

Abstract

Online mirror descent (OMD) and dual averaging (DA)—two fundamental algorithms for online convex optimization—are known to have very similar (and sometimes identical) performance guarantees when used with a fixed learning rate. Under dynamic learning rates, however, OMD is provably inferior to DA and suffers a linear regret, even in common settings such as prediction with expert advice. We modify the OMD algorithm through a simple technique that we call stabilization. We give essentially the same abstract regret bound for OMD with stabilization and for DA by modifying the classical OMD convergence analysis in a careful and modular way that allows for straightforward and flexible proofs. Simple corollaries of these bounds show that OMD with stabilization and DA enjoy the same performance guarantees in many applications—even under dynamic learning rates. We also shed light on the similarities between OMD and DA and show simple conditions under which stabilized-OMD and DA generate the same iterates.
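
The gap described above is easiest to see on the probability simplex with the negative-entropy mirror map, i.e. the prediction-with-expert-advice setting. The sketch below is illustrative only and does not implement the paper's stabilization technique: it runs the standard OMD (exponentiated-gradient) and DA updates under an assumed decreasing learning rate eta_t = 1/sqrt(t), showing that the two methods generate identical iterates when the rate is fixed but diverge once the rate varies over time. The function names and step-size schedule are choices made for this example, not notation from the paper.

import numpy as np

def omd_entropic(grads, etas):
    """Online mirror descent with the negative-entropy mirror map on the simplex.
    Update: x_{t+1} is proportional to x_t * exp(-eta_t * g_t)."""
    d = grads.shape[1]
    x = np.full(d, 1.0 / d)           # uniform starting point
    iterates = [x]
    for g, eta in zip(grads, etas):
        x = x * np.exp(-eta * g)
        x = x / x.sum()               # normalize back onto the simplex
        iterates.append(x)
    return iterates

def da_entropic(grads, etas):
    """Dual averaging with the entropic regularizer on the simplex.
    Update: x_{t+1} is proportional to exp(-eta_t * sum of all past gradients)."""
    d = grads.shape[1]
    iterates = [np.full(d, 1.0 / d)]
    G = np.zeros(d)                   # running sum of gradients
    for g, eta in zip(grads, etas):
        G += g
        x = np.exp(-eta * G)
        x = x / x.sum()
        iterates.append(x)
    return iterates

rng = np.random.default_rng(0)
T, d = 50, 3
grads = rng.standard_normal((T, d))

fixed = np.full(T, 0.1)                        # constant learning rate
dynamic = 1.0 / np.sqrt(np.arange(1, T + 1))   # decreasing learning rate

# With a fixed rate the two methods produce the same final iterate...
print(np.allclose(omd_entropic(grads, fixed)[-1], da_entropic(grads, fixed)[-1]))      # True
# ...but with a dynamic rate they differ, which is where stabilization comes in.
print(np.allclose(omd_entropic(grads, dynamic)[-1], da_entropic(grads, dynamic)[-1]))  # False
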

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-fang20a,
  title     = {Online mirror descent and dual averaging: keeping pace in the dynamic case},
  author    = {Fang, Huang and Harvey, Nick and Portella, Victor and Friedlander, Michael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3008--3017},
  year      = {2020},
  editor    = {Daum{\'e} III, Hal and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/fang20a/fang20a.pdf},
  url       = {https://proceedings.mlr.press/v119/fang20a.html}
}
Endnote
%0 Conference Paper
%T Online mirror descent and dual averaging: keeping pace in the dynamic case
%A Huang Fang
%A Nick Harvey
%A Victor Portella
%A Michael Friedlander
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-fang20a
%I PMLR
%P 3008--3017
%U https://proceedings.mlr.press/v119/fang20a.html
%V 119
APA
Fang, H., Harvey, N., Portella, V. & Friedlander, M. (2020). Online mirror descent and dual averaging: keeping pace in the dynamic case. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3008-3017. Available from https://proceedings.mlr.press/v119/fang20a.html.
