Online Learning with Continuous Variations: Dynamic Regret and Reductions

Ching-An Cheng, Jonathan Lee, Ken Goldberg, Byron Boots
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2218-2228, 2020.

Abstract

Online learning is a powerful tool for analyzing iterative algorithms. However, the classic adversarial setup fails to capture regularity that can exist in practice. Motivated by this observation, we establish a new setup, called Continuous Online Learning (COL), where the gradient of the online loss function changes continuously across rounds with respect to the learner’s decisions. We show that COL appropriately describes many interesting applications, from general equilibrium problems (EPs) to optimization in episodic MDPs. Using this new setup, we revisit the difficulty of sublinear dynamic regret. We prove a fundamental equivalence between achieving sublinear dynamic regret in COL and solving certain EPs. With this insight, we offer conditions for efficient algorithms that achieve sublinear dynamic regret, even when the losses are chosen adaptively without any a priori variation budget. Furthermore, we show for COL a reduction from dynamic regret to both static regret and convergence in the associated EP, allowing us to analyze the dynamic regret of many existing algorithms.
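For readers unfamiliar with the terms above, the following sketch spells out the standard objects the abstract refers to; the notation here is a common convention and is assumed, not quoted from the paper. In COL, the loss revealed at round $t$ depends continuously on the learner's own decision $x_t$, and "sublinear dynamic regret" compares against a per-round comparator:

$$
\ell_t(\cdot) = f(x_t, \cdot), \qquad
\mathrm{Regret}^d_T \;:=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \sum_{t=1}^{T} \min_{x \in \mathcal{X}} \ell_t(x) \;=\; o(T),
$$

where $\mathcal{X}$ is the decision set and $x \mapsto \nabla f(x, \cdot)$ is continuous, in contrast to the classic adversarial setup where $\ell_t$ may be chosen arbitrarily each round.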

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-cheng20a,
  title     = {Online Learning with Continuous Variations: Dynamic Regret and Reductions},
  author    = {Cheng, Ching-An and Lee, Jonathan and Goldberg, Ken and Boots, Byron},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2218--2228},
  year      = {2020},
  editor    = {Silvia Chiappa and Roberto Calandra},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/cheng20a/cheng20a.pdf},
  url       = {http://proceedings.mlr.press/v108/cheng20a.html},
  abstract  = {Online learning is a powerful tool for analyzing iterative algorithms. However, the classic adversarial setup fails to capture regularity that can exist in practice. Motivated by this observation, we establish a new setup, called Continuous Online Learning (COL), where the gradient of the online loss function changes continuously across rounds with respect to the learner’s decisions. We show that COL appropriately describes many interesting applications, from general equilibrium problems (EPs) to optimization in episodic MDPs. Using this new setup, we revisit the difficulty of sublinear dynamic regret. We prove a fundamental equivalence between achieving sublinear dynamic regret in COL and solving certain EPs. With this insight, we offer conditions for efficient algorithms that achieve sublinear dynamic regret, even when the losses are chosen adaptively without any a priori variation budget. Furthermore, we show for COL a reduction from dynamic regret to both static regret and convergence in the associated EP, allowing us to analyze the dynamic regret of many existing algorithms.}
}
Endnote
%0 Conference Paper
%T Online Learning with Continuous Variations: Dynamic Regret and Reductions
%A Ching-An Cheng
%A Jonathan Lee
%A Ken Goldberg
%A Byron Boots
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-cheng20a
%I PMLR
%P 2218--2228
%U http://proceedings.mlr.press/v108/cheng20a.html
%V 108
%X Online learning is a powerful tool for analyzing iterative algorithms. However, the classic adversarial setup fails to capture regularity that can exist in practice. Motivated by this observation, we establish a new setup, called Continuous Online Learning (COL), where the gradient of the online loss function changes continuously across rounds with respect to the learner’s decisions. We show that COL appropriately describes many interesting applications, from general equilibrium problems (EPs) to optimization in episodic MDPs. Using this new setup, we revisit the difficulty of sublinear dynamic regret. We prove a fundamental equivalence between achieving sublinear dynamic regret in COL and solving certain EPs. With this insight, we offer conditions for efficient algorithms that achieve sublinear dynamic regret, even when the losses are chosen adaptively without any a priori variation budget. Furthermore, we show for COL a reduction from dynamic regret to both static regret and convergence in the associated EP, allowing us to analyze the dynamic regret of many existing algorithms.
APA
Cheng, C., Lee, J., Goldberg, K. & Boots, B. (2020). Online Learning with Continuous Variations: Dynamic Regret and Reductions. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2218-2228. Available from http://proceedings.mlr.press/v108/cheng20a.html.