Adaptive Regret of Convex and Smooth Functions

Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7414-7423, 2019.

Abstract

We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss.
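For context, the adaptive regret used as the performance measure above is, in its standard form (Hazan & Seshadhri, 2009), the regret maximized over every contiguous interval of the time horizon. A sketch of that definition, with the notation f_t (loss function at round t), x_t (the learner's decision), and X (the feasible domain) assumed here rather than taken from this page:

\mathrm{A\text{-}Regret}(T) \;=\; \max_{1 \le s \le q \le T} \left[ \sum_{t=s}^{q} f_t(\mathbf{x}_t) \;-\; \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=s}^{q} f_t(\mathbf{x}) \right]

The second term is the loss of the best fixed comparator on the interval; per the abstract, the paper's problem-dependent bounds become much tighter when this comparator loss is small, by exploiting smoothness in addition to convexity.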

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhang19j,
  title     = {Adaptive Regret of Convex and Smooth Functions},
  author    = {Zhang, Lijun and Liu, Tie-Yan and Zhou, Zhi-Hua},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7414--7423},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/zhang19j/zhang19j.pdf},
  url       = {https://proceedings.mlr.press/v97/zhang19j.html},
  abstract  = {We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss.}
}
Endnote
%0 Conference Paper
%T Adaptive Regret of Convex and Smooth Functions
%A Lijun Zhang
%A Tie-Yan Liu
%A Zhi-Hua Zhou
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhang19j
%I PMLR
%P 7414--7423
%U https://proceedings.mlr.press/v97/zhang19j.html
%V 97
%X We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss.
APA
Zhang, L., Liu, T. & Zhou, Z. (2019). Adaptive Regret of Convex and Smooth Functions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7414-7423. Available from https://proceedings.mlr.press/v97/zhang19j.html.
