Online Optimization of Smoothed Piecewise Constant Functions

Vincent Cohen-Addad, Varun Kanade
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:412-420, 2017.

Abstract

We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively choosing parameters of learning algorithms, as in the framework recently introduced by Gupta and Roughgarden (2016). The majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients, and with good reason: any learning algorithm suffers linear regret even against adversarially chosen piecewise constant functions, arguably the simplest non-Lipschitz functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden (2016). In this setting, the sequence of functions may be chosen by an adversary, but with some uncertainty in the locations of the discontinuities. We give algorithms that achieve sublinear regret in both the full-information and bandit settings.
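To make the setting concrete, a standard baseline in the full-information case is to discretize [0, 1) into a grid and run Hedge (exponential weights) over the grid points. The sketch below is only an illustration of that discretization idea, not the algorithm from the paper; the names hedge_on_grid, K, and eta are illustrative, and losses are assumed to take values in [0, 1].

```python
import numpy as np

def hedge_on_grid(loss_fns, K=1000, eta=0.1, rng=None):
    """Full-information Hedge over a uniform grid of [0, 1).

    loss_fns : sequence of callables f_t : [0, 1) -> [0, 1]
               (e.g. piecewise constant losses, revealed after each round)
    K        : number of grid points (discretization of the domain)
    eta      : learning rate
    Returns the points played and the total loss incurred.
    """
    rng = np.random.default_rng() if rng is None else rng
    grid = np.arange(K) / K          # candidate points in [0, 1)
    weights = np.ones(K)             # uniform initial weights
    plays, total_loss = [], 0.0
    for f in loss_fns:
        p = weights / weights.sum()      # sampling distribution over the grid
        x = rng.choice(grid, p=p)        # point played this round
        plays.append(x)
        total_loss += f(x)
        losses = np.array([f(g) for g in grid])  # full information: f is
        weights *= np.exp(-eta * losses)         # observed on every grid point
    return plays, total_loss

# Example: a piecewise constant loss with one discontinuity at 0.37.
fs = [lambda x: 0.0 if x < 0.37 else 1.0 for _ in range(500)]
plays, total = hedge_on_grid(fs, K=200, eta=0.5)
print(total)
```

Against a worst-case adversary such a fixed grid fails, since discontinuities can be placed between grid points; the smoothed setting, where the adversary has uncertainty about the discontinuity locations, is what makes discretization-style arguments viable.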

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-cohen-addad17a,
  title     = {{Online Optimization of Smoothed Piecewise Constant Functions}},
  author    = {Cohen-Addad, Vincent and Kanade, Varun},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {412--420},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/cohen-addad17a/cohen-addad17a.pdf},
  url       = {https://proceedings.mlr.press/v54/cohen-addad17a.html}
}
Endnote
%0 Conference Paper
%T Online Optimization of Smoothed Piecewise Constant Functions
%A Vincent Cohen-Addad
%A Varun Kanade
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-cohen-addad17a
%I PMLR
%P 412--420
%U https://proceedings.mlr.press/v54/cohen-addad17a.html
%V 54
APA
Cohen-Addad, V. & Kanade, V. (2017). Online Optimization of Smoothed Piecewise Constant Functions. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:412-420. Available from https://proceedings.mlr.press/v54/cohen-addad17a.html.
