Learning to Stop with Surprisingly Few Samples

Daniel Russo, Assaf Zeevi, Tianyi Zhang
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:3887-3888, 2021.

Abstract

We consider a discounted infinite-horizon optimal stopping problem. If the underlying distribution is known a priori, the solution of this problem is obtained via dynamic programming (DP) and is given by a well-known threshold rule. When information on this distribution is lacking, a natural (though naive) approach is “explore-then-exploit,” whereby the unknown distribution or its parameters are estimated over an initial exploration phase, and this estimate is then used in the DP to determine actions over the residual exploitation phase. We show: (i) with proper tuning, this approach leads to performance comparable to the full-information DP solution; and (ii) despite common wisdom on the sensitivity of such “plug-in” approaches in DP due to propagation of estimation errors, a surprisingly “short” (logarithmic in the horizon) exploration horizon suffices to obtain said performance. In cases where the underlying distribution is heavy-tailed, these observations are even more pronounced: a single-sample exploration phase suffices.
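To make the plug-in recipe concrete, here is a minimal Python sketch under illustrative assumptions: offers are i.i.d., payoffs are discounted by a factor gamma, and the optimal threshold c* satisfies the classic fixed-point equation c* = gamma * E[max(X, c*)], so stopping at the first offer at or above c* is optimal. The exponential offer distribution, the exploration length n ≈ log T, and all names are placeholder choices, not the paper's exact tuning.

import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95          # discount factor
T = 10_000            # nominal horizon used to size the exploration phase
offers = rng.exponential(scale=1.0, size=T)   # stand-in for the unknown offer stream

# --- Exploration: observe (and forgo) the first n = O(log T) offers ---
n = int(np.ceil(np.log(T)))
sample = offers[:n]

# --- Plug-in threshold: solve c = gamma * E_hat[max(X, c)] on the empirical sample ---
c = sample.mean()
for _ in range(1000):
    c_next = gamma * np.maximum(sample, c).mean()
    if abs(c_next - c) < 1e-10:
        break
    c = c_next

# --- Exploitation: stop at the first offer exceeding the plug-in threshold ---
for t in range(n, T):
    if offers[t] >= c:
        reward = gamma**t * offers[t]   # discounted payoff at the stopping time
        print(f"stopped at t={t}, offer={offers[t]:.3f}, "
              f"threshold={c:.3f}, discounted reward={reward:.4f}")
        break

Since the map c → gamma * E[max(X, c)] is a gamma-contraction, the fixed-point iteration converges geometrically; the sketch is only meant to illustrate the explore-then-exploit structure the abstract describes, not the paper's analysis of how the estimation error in the threshold propagates to performance.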

Cite this Paper

BibTeX
@InProceedings{pmlr-v134-russo21a,
  title     = {Learning to Stop with Surprisingly Few Samples},
  author    = {Russo, Daniel and Zeevi, Assaf and Zhang, Tianyi},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {3887--3888},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/russo21a/russo21a.pdf},
  url       = {https://proceedings.mlr.press/v134/russo21a.html},
  abstract  = {We consider a discounted infinite-horizon optimal stopping problem. If the underlying distribution is known a priori, the solution of this problem is obtained via dynamic programming (DP) and is given by a well-known threshold rule. When information on this distribution is lacking, a natural (though naive) approach is “explore-then-exploit,” whereby the unknown distribution or its parameters are estimated over an initial exploration phase, and this estimate is then used in the DP to determine actions over the residual exploitation phase. We show: (i) with proper tuning, this approach leads to performance comparable to the full-information DP solution; and (ii) despite common wisdom on the sensitivity of such “plug-in” approaches in DP due to propagation of estimation errors, a surprisingly “short” (logarithmic in the horizon) exploration horizon suffices to obtain said performance. In cases where the underlying distribution is heavy-tailed, these observations are even more pronounced: a single-sample exploration phase suffices.}
}
Endnote
%0 Conference Paper
%T Learning to Stop with Surprisingly Few Samples
%A Daniel Russo
%A Assaf Zeevi
%A Tianyi Zhang
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-russo21a
%I PMLR
%P 3887--3888
%U https://proceedings.mlr.press/v134/russo21a.html
%V 134
%X We consider a discounted infinite-horizon optimal stopping problem. If the underlying distribution is known a priori, the solution of this problem is obtained via dynamic programming (DP) and is given by a well-known threshold rule. When information on this distribution is lacking, a natural (though naive) approach is “explore-then-exploit,” whereby the unknown distribution or its parameters are estimated over an initial exploration phase, and this estimate is then used in the DP to determine actions over the residual exploitation phase. We show: (i) with proper tuning, this approach leads to performance comparable to the full-information DP solution; and (ii) despite common wisdom on the sensitivity of such “plug-in” approaches in DP due to propagation of estimation errors, a surprisingly “short” (logarithmic in the horizon) exploration horizon suffices to obtain said performance. In cases where the underlying distribution is heavy-tailed, these observations are even more pronounced: a single-sample exploration phase suffices.
APA
Russo, D., Zeevi, A. & Zhang, T. (2021). Learning to Stop with Surprisingly Few Samples. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:3887-3888. Available from https://proceedings.mlr.press/v134/russo21a.html.