Doubly robust off-policy evaluation with shrinkage

Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, Miroslav Dudik
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9167-9176, 2020.

Abstract

We propose a new framework for designing estimators for off-policy evaluation in contextual bandits. Our approach is based on the asymptotically optimal doubly robust estimator, but we shrink the importance weights to minimize a bound on the mean squared error, which results in a better bias-variance tradeoff in finite samples. We use this optimization-based framework to obtain three estimators: (a) a weight-clipping estimator, (b) a new weight-shrinkage estimator, and (c) the first shrinkage-based estimator for combinatorial action sets. Extensive experiments in both standard and combinatorial bandit benchmark problems show that our estimators are highly adaptive and typically outperform state-of-the-art methods.
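As a rough illustration only (not the authors' code), the weight-clipping estimator (a) can be sketched as a doubly robust estimate in which each importance weight is truncated at a threshold `lam`; in the paper, this threshold is chosen by minimizing a bound on the mean squared error, whereas here it is simply passed in as a fixed parameter. All function and variable names below are hypothetical.

```python
import numpy as np

def dr_clipped(rewards, w, q_logged, q_target, lam):
    """Doubly robust off-policy value estimate with clipped importance weights.

    rewards:  observed rewards r_i logged under the behavior policy
    w:        importance weights pi(a_i | x_i) / mu(a_i | x_i)
    q_logged: reward-model predictions q(x_i, a_i) for the logged actions
    q_target: expected model reward under the target policy, E_{a ~ pi}[q(x_i, a)]
    lam:      clipping threshold (lam = inf recovers the plain DR estimator)
    """
    w_clipped = np.minimum(w, lam)  # shrink large weights to reduce variance
    return np.mean(q_target + w_clipped * (rewards - q_logged))
```

Clipping trades a small bias (from shrinking the correction term) for lower variance, which is the finite-sample bias-variance tradeoff the abstract refers to.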

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-su20a,
  title     = {Doubly robust off-policy evaluation with shrinkage},
  author    = {Su, Yi and Dimakopoulou, Maria and Krishnamurthy, Akshay and Dudik, Miroslav},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9167--9176},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/su20a/su20a.pdf},
  url       = {https://proceedings.mlr.press/v119/su20a.html},
  abstract  = {We propose a new framework for designing estimators for off-policy evaluation in contextual bandits. Our approach is based on the asymptotically optimal doubly robust estimator, but we shrink the importance weights to minimize a bound on the mean squared error, which results in a better bias-variance tradeoff in finite samples. We use this optimization-based framework to obtain three estimators: (a) a weight-clipping estimator, (b) a new weight-shrinkage estimator, and (c) the first shrinkage-based estimator for combinatorial action sets. Extensive experiments in both standard and combinatorial bandit benchmark problems show that our estimators are highly adaptive and typically outperform state-of-the-art methods.}
}
Endnote
%0 Conference Paper
%T Doubly robust off-policy evaluation with shrinkage
%A Yi Su
%A Maria Dimakopoulou
%A Akshay Krishnamurthy
%A Miroslav Dudik
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-su20a
%I PMLR
%P 9167--9176
%U https://proceedings.mlr.press/v119/su20a.html
%V 119
%X We propose a new framework for designing estimators for off-policy evaluation in contextual bandits. Our approach is based on the asymptotically optimal doubly robust estimator, but we shrink the importance weights to minimize a bound on the mean squared error, which results in a better bias-variance tradeoff in finite samples. We use this optimization-based framework to obtain three estimators: (a) a weight-clipping estimator, (b) a new weight-shrinkage estimator, and (c) the first shrinkage-based estimator for combinatorial action sets. Extensive experiments in both standard and combinatorial bandit benchmark problems show that our estimators are highly adaptive and typically outperform state-of-the-art methods.
APA
Su, Y., Dimakopoulou, M., Krishnamurthy, A. & Dudik, M. (2020). Doubly robust off-policy evaluation with shrinkage. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9167-9176. Available from https://proceedings.mlr.press/v119/su20a.html.