Online Optimization with Gradual Variations

Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, Shenghuo Zhu
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:6.1-6.20, 2012.

Abstract

We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We show that for the linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret which only scales as the square root of the deviation. For the closely related problem of prediction with expert advice, we show that an online algorithm modified from the multiplicative update algorithm can also achieve a similar regret bound for a different measure of deviation. Finally, for loss functions which are strictly convex, we show that an online algorithm modified from the online Newton step algorithm can achieve a regret which is only logarithmic in terms of the deviation, and as an application, we can also have such a logarithmic regret for the portfolio management problem.
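
For concreteness, the deviation in the abstract can be read as a gradual-variation quantity of the following form. This is a hedged sketch, assuming the distance between consecutive loss functions is measured through their gradients in the Euclidean norm over the decision set \(\mathcal{X}\); the abstract itself leaves the choice of distance metric open:

\[
  D_T \;=\; \sum_{t=1}^{T-1} \max_{x \in \mathcal{X}} \bigl\| \nabla f_{t+1}(x) - \nabla f_t(x) \bigr\|_2^{2},
\]

under which the abstract's claims amount to regret bounds on the order of \(\sqrt{D_T}\) for linear and general smooth convex losses and \(\log D_T\) for strictly convex losses.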

Cite this Paper


BibTeX
@InProceedings{pmlr-v23-chiang12,
  title     = {Online Optimization with Gradual Variations},
  author    = {Chiang, Chao-Kai and Yang, Tianbao and Lee, Chia-Jung and Mahdavi, Mehrdad and Lu, Chi-Jen and Jin, Rong and Zhu, Shenghuo},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  pages     = {6.1--6.20},
  year      = {2012},
  editor    = {Mannor, Shie and Srebro, Nathan and Williamson, Robert C.},
  volume    = {23},
  series    = {Proceedings of Machine Learning Research},
  address   = {Edinburgh, Scotland},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v23/chiang12/chiang12.pdf},
  url       = {https://proceedings.mlr.press/v23/chiang12.html},
  abstract  = {We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We show that for the linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret which only scales as the square root of the deviation. For the closely related problem of prediction with expert advice, we show that an online algorithm modified from the multiplicative update algorithm can also achieve a similar regret bound for a different measure of deviation. Finally, for loss functions which are strictly convex, we show that an online algorithm modified from the online Newton step algorithm can achieve a regret which is only logarithmic in terms of the deviation, and as an application, we can also have such a logarithmic regret for the portfolio management problem.}
}
Endnote
%0 Conference Paper
%T Online Optimization with Gradual Variations
%A Chao-Kai Chiang
%A Tianbao Yang
%A Chia-Jung Lee
%A Mehrdad Mahdavi
%A Chi-Jen Lu
%A Rong Jin
%A Shenghuo Zhu
%B Proceedings of the 25th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2012
%E Shie Mannor
%E Nathan Srebro
%E Robert C. Williamson
%F pmlr-v23-chiang12
%I PMLR
%P 6.1--6.20
%U https://proceedings.mlr.press/v23/chiang12.html
%V 23
%X We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We show that for the linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret which only scales as the square root of the deviation. For the closely related problem of prediction with expert advice, we show that an online algorithm modified from the multiplicative update algorithm can also achieve a similar regret bound for a different measure of deviation. Finally, for loss functions which are strictly convex, we show that an online algorithm modified from the online Newton step algorithm can achieve a regret which is only logarithmic in terms of the deviation, and as an application, we can also have such a logarithmic regret for the portfolio management problem.
RIS
TY - CPAPER
TI - Online Optimization with Gradual Variations
AU - Chao-Kai Chiang
AU - Tianbao Yang
AU - Chia-Jung Lee
AU - Mehrdad Mahdavi
AU - Chi-Jen Lu
AU - Rong Jin
AU - Shenghuo Zhu
BT - Proceedings of the 25th Annual Conference on Learning Theory
DA - 2012/06/16
ED - Shie Mannor
ED - Nathan Srebro
ED - Robert C. Williamson
ID - pmlr-v23-chiang12
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 23
SP - 6.1
EP - 6.20
L1 - http://proceedings.mlr.press/v23/chiang12/chiang12.pdf
UR - https://proceedings.mlr.press/v23/chiang12.html
AB - We study the online convex optimization problem, in which an online algorithm has to make repeated decisions with convex loss functions and hopes to achieve a small regret. We consider a natural restriction of this problem in which the loss functions have a small deviation, measured by the sum of the distances between every two consecutive loss functions, according to some distance metrics. We show that for the linear and general smooth convex loss functions, an online algorithm modified from the gradient descent algorithm can achieve a regret which only scales as the square root of the deviation. For the closely related problem of prediction with expert advice, we show that an online algorithm modified from the multiplicative update algorithm can also achieve a similar regret bound for a different measure of deviation. Finally, for loss functions which are strictly convex, we show that an online algorithm modified from the online Newton step algorithm can achieve a regret which is only logarithmic in terms of the deviation, and as an application, we can also have such a logarithmic regret for the portfolio management problem.
ER -
APA
Chiang, C., Yang, T., Lee, C., Mahdavi, M., Lu, C., Jin, R. & Zhu, S. (2012). Online Optimization with Gradual Variations. Proceedings of the 25th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 23:6.1-6.20. Available from https://proceedings.mlr.press/v23/chiang12.html.