Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions

Peng Zhao, Lijun Zhang
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:48-59, 2021.

Abstract

In this paper, we present an improved analysis for the dynamic regret of strongly convex and smooth functions. Specifically, we investigate the Online Multiple Gradient Descent (OMGD) algorithm proposed by Zhang et al. (2017). The original analysis shows that the dynamic regret of OMGD is at most O(min{P_T, S_T}), where P_T and S_T are the path-length and squared path-length, which measure the cumulative movement of the minimizers of the online functions. We demonstrate that, with an improved analysis, the dynamic regret of OMGD can be tightened to O(min{P_T, S_T, V_T}), where V_T is the function variation of the online functions. Note that the quantities P_T, S_T, and V_T essentially reflect different aspects of environmental non-stationarity: they are not comparable in general, and each is favored in different scenarios. Therefore, the dynamic regret bound presented in this paper achieves a best-of-three-worlds guarantee and is strictly tighter than previous results.
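For context, the dynamic regret and the three non-stationarity measures are typically defined as follows; this is a sketch of the standard notation in this line of work (with x_t^* denoting a minimizer of f_t over the feasible set X), and the paper's exact conventions may differ slightly:

\[
\text{D-Regret}_T = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} \min_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x}),
\]
\[
P_T = \sum_{t=2}^{T} \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_2, \qquad
S_T = \sum_{t=2}^{T} \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_2^2, \qquad
V_T = \sum_{t=2}^{T} \sup_{\mathbf{x} \in \mathcal{X}} \left| f_t(\mathbf{x}) - f_{t-1}(\mathbf{x}) \right|.
\]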

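To make the algorithm concrete: after playing x_t and observing f_t, OMGD applies several projected gradient steps on f_t to produce the next decision x_{t+1}. Below is a minimal Python sketch under assumed interfaces; the step size eta, the inner-step count K, the grad_fns list, and the project callable are illustrative choices, not the paper's exact specification.

import numpy as np

def omgd(x0, grad_fns, project, eta=0.1, K=5):
    # Online Multiple Gradient Descent (sketch): at round t we play x_t,
    # observe f_t, then take K projected gradient steps on f_t to get x_{t+1}.
    x = np.asarray(x0, dtype=float)
    decisions = []
    for grad in grad_fns:            # grad_fns[t](x) returns the gradient of f_t at x
        decisions.append(x.copy())   # decision played at round t
        z = x.copy()
        for _ in range(K):           # multiple descent steps on the revealed function
            z = project(z - eta * grad(z))
        x = z                        # next round's decision
    return decisions

# Toy usage: quadratic losses f_t(x) = 0.5 * ||x - c_t||^2 with drifting minimizers c_t.
T, d = 50, 3
centers = [np.sin(0.1 * t) * np.ones(d) for t in range(T)]
grad_fns = [lambda x, c=c: x - c for c in centers]
plays = omgd(np.zeros(d), grad_fns, project=lambda z: z, eta=0.5, K=3)

The values of eta and K above are arbitrary for the toy example; the analysis ties them to the smoothness and strong-convexity parameters of the online functions.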
Cite this Paper


BibTeX
@InProceedings{pmlr-v144-zhao21a,
  title     = {Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions},
  author    = {Zhao, Peng and Zhang, Lijun},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages     = {48--59},
  year      = {2021},
  editor    = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume    = {144},
  series    = {Proceedings of Machine Learning Research},
  month     = {07 -- 08 June},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v144/zhao21a/zhao21a.pdf},
  url       = {https://proceedings.mlr.press/v144/zhao21a.html},
  abstract  = {In this paper, we present an improved analysis for the dynamic regret of strongly convex and smooth functions. Specifically, we investigate the Online Multiple Gradient Descent (OMGD) algorithm proposed by Zhang et al. (2017). The original analysis shows that the dynamic regret of OMGD is at most O(min{P_T, S_T}), where P_T and S_T are the path-length and squared path-length, which measure the cumulative movement of the minimizers of the online functions. We demonstrate that, with an improved analysis, the dynamic regret of OMGD can be tightened to O(min{P_T, S_T, V_T}), where V_T is the function variation of the online functions. Note that the quantities P_T, S_T, and V_T essentially reflect different aspects of environmental non-stationarity: they are not comparable in general, and each is favored in different scenarios. Therefore, the dynamic regret bound presented in this paper achieves a best-of-three-worlds guarantee and is strictly tighter than previous results.}
}
APA
Zhao, P. & Zhang, L. (2021). Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:48-59. Available from https://proceedings.mlr.press/v144/zhao21a.html.
