Dynamical Models and Tracking Regret in Online Convex Programming

Eric Hall, Rebecca Willett
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):579-587, 2013.

Abstract

This paper describes a new online convex optimization method which incorporates a family of candidate dynamical models and establishes novel tracking regret bounds that scale with the comparator’s deviation from the best dynamical model in this family. Previous online optimization methods are designed to have a total accumulated loss comparable to that of the best comparator sequence, and existing tracking or shifting regret bounds scale with the overall variation of the comparator sequence. In many practical scenarios, however, the environment is nonstationary and comparator sequences with small variation are quite weak, resulting in large losses. The proposed dynamic mirror descent method, in contrast, can yield low regret relative to highly variable comparator sequences by both tracking the best dynamical model and forming predictions based on that model. This concept is demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking a dynamic social network.
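
To make the abstract's idea concrete, the sketch below is our own illustration, not code from the paper: it specializes dynamic mirror descent to Euclidean geometry, where the mirror step becomes a projected gradient step and each iterate is then advanced by a candidate dynamical model Phi_t so the next prediction anticipates how the environment moves. All function names, and the Euclidean specialization itself, are our assumptions.

import numpy as np  # used for vector iterates in the example below

def dynamic_mirror_descent(grad, dynamics, theta0, eta, T, project=lambda x: x):
    """One trajectory of dynamic mirror descent (Euclidean sketch, ours).

    grad(t, theta)     -- gradient of the round-t loss at theta
    dynamics(t, theta) -- candidate dynamical model Phi_t applied to theta
    project(theta)     -- Euclidean projection onto the feasible set
    """
    theta = theta0
    predictions = []
    for t in range(T):
        predictions.append(theta)  # play theta, then observe the round-t loss
        # Standard mirror-descent step; with the squared Euclidean Bregman
        # divergence this is a projected gradient step.
        theta_tilde = project(theta - eta * grad(t, theta))
        # Advance by the dynamical model so the next prediction extrapolates
        # along the assumed dynamics rather than staying put.
        theta = dynamics(t, theta_tilde)
    return predictions

# Hypothetical usage: tracking a target that rotates by a known angle each
# round, with Phi_t chosen to be that same rotation.
A = np.array([[np.cos(0.1), -np.sin(0.1)],
              [np.sin(0.1),  np.cos(0.1)]])
targets = [np.linalg.matrix_power(A, t) @ np.array([1.0, 0.0]) for t in range(100)]
preds = dynamic_mirror_descent(
    grad=lambda t, th: 2.0 * (th - targets[t]),  # gradient of ||th - target_t||^2
    dynamics=lambda t, th: A @ th,               # Phi_t: the assumed rotation
    theta0=np.zeros(2), eta=0.3, T=100)

Note that with the identity dynamics (dynamics=lambda t, th: th) the loop reduces to ordinary projected online gradient descent, which matches the abstract's framing: the gain over standard tracking regret comes entirely from how well the supplied model Phi_t captures the comparator's motion.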

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-hall13,
  title     = {Dynamical Models and Tracking Regret in Online Convex Programming},
  author    = {Hall, Eric and Willett, Rebecca},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {579--587},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/hall13.pdf},
  url       = {https://proceedings.mlr.press/v28/hall13.html},
  abstract  = {This paper describes a new online convex optimization method which incorporates a family of candidate dynamical models and establishes novel tracking regret bounds that scale with the comparator’s deviation from the best dynamical model in this family. Previous online optimization methods are designed to have a total accumulated loss comparable to that of the best comparator sequence, and existing tracking or shifting regret bounds scale with the overall variation of the comparator sequence. In many practical scenarios, however, the environment is nonstationary and comparator sequences with small variation are quite weak, resulting in large losses. The proposed dynamic mirror descent method, in contrast, can yield low regret relative to highly variable comparator sequences by both tracking the best dynamical model and forming predictions based on that model. This concept is demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking a dynamic social network.}
}
Endnote
%0 Conference Paper
%T Dynamical Models and Tracking Regret in Online Convex Programming
%A Eric Hall
%A Rebecca Willett
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-hall13
%I PMLR
%P 579--587
%U https://proceedings.mlr.press/v28/hall13.html
%V 28
%N 1
%X This paper describes a new online convex optimization method which incorporates a family of candidate dynamical models and establishes novel tracking regret bounds that scale with the comparator’s deviation from the best dynamical model in this family. Previous online optimization methods are designed to have a total accumulated loss comparable to that of the best comparator sequence, and existing tracking or shifting regret bounds scale with the overall variation of the comparator sequence. In many practical scenarios, however, the environment is nonstationary and comparator sequences with small variation are quite weak, resulting in large losses. The proposed dynamic mirror descent method, in contrast, can yield low regret relative to highly variable comparator sequences by both tracking the best dynamical model and forming predictions based on that model. This concept is demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking a dynamic social network.
RIS
TY  - CPAPER
TI  - Dynamical Models and Tracking Regret in Online Convex Programming
AU  - Eric Hall
AU  - Rebecca Willett
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-hall13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 579
EP  - 587
L1  - http://proceedings.mlr.press/v28/hall13.pdf
UR  - https://proceedings.mlr.press/v28/hall13.html
AB  - This paper describes a new online convex optimization method which incorporates a family of candidate dynamical models and establishes novel tracking regret bounds that scale with the comparator’s deviation from the best dynamical model in this family. Previous online optimization methods are designed to have a total accumulated loss comparable to that of the best comparator sequence, and existing tracking or shifting regret bounds scale with the overall variation of the comparator sequence. In many practical scenarios, however, the environment is nonstationary and comparator sequences with small variation are quite weak, resulting in large losses. The proposed dynamic mirror descent method, in contrast, can yield low regret relative to highly variable comparator sequences by both tracking the best dynamical model and forming predictions based on that model. This concept is demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking a dynamic social network.
ER  -
APA
Hall, E. & Willett, R. (2013). Dynamical Models and Tracking Regret in Online Convex Programming. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):579-587. Available from https://proceedings.mlr.press/v28/hall13.html.
