Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting

Ilja Kuzborskij, Claire Vernade, Andras Gyorgy, Csaba Szepesvari
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:640-648, 2021.

Abstract

We consider off-policy evaluation in the contextual bandit setting for the purpose of obtaining a robust off-policy selection strategy, where the selection strategy is evaluated based on the value of the chosen policy in a set of proposal (target) policies. We propose a new method to compute a lower bound on the value of an arbitrary target policy given some logged data in contextual bandits for a desired coverage. The lower bound is built around the so-called Self-normalized Importance Weighting (SN) estimator. It combines the use of a semi-empirical Efron-Stein tail inequality to control the concentration and Harris’ inequality to control the bias. The new approach is evaluated on a number of synthetic and real datasets and is found to be superior to its main competitors, both in terms of tightness of the confidence intervals and the quality of the policies chosen.
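The estimator at the core of the paper's lower bound can be illustrated with a short sketch. This is not the authors' code, and it shows only the basic self-normalized importance weighting (SN) point estimate, not the paper's Efron-Stein/Harris confidence bound; the function and argument names are illustrative.

```python
import numpy as np

def sn_estimate(target_probs, logging_probs, rewards):
    """Self-normalized importance weighting (SN) estimate of a target
    policy's value from logged contextual-bandit data.

    target_probs[i]  : pi(a_i | x_i), target policy's probability of the logged action
    logging_probs[i] : mu(a_i | x_i), logging policy's probability of the same action
    rewards[i]       : observed reward for the logged (context, action) pair
    """
    w = np.asarray(target_probs) / np.asarray(logging_probs)  # importance weights
    r = np.asarray(rewards)
    # Dividing by the sum of the weights (rather than by n, as the plain
    # importance-weighted estimator does) introduces a small bias but
    # typically reduces variance substantially.
    return float(np.sum(w * r) / np.sum(w))
```

For example, with weights that are all equal the SN estimate reduces to the plain average of the rewards, regardless of the weights' common scale.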

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-kuzborskij21a,
  title     = {Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting},
  author    = {Kuzborskij, Ilja and Vernade, Claire and Gyorgy, Andras and Szepesvari, Csaba},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {640--648},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/kuzborskij21a/kuzborskij21a.pdf},
  url       = {https://proceedings.mlr.press/v130/kuzborskij21a.html},
  abstract  = {We consider off-policy evaluation in the contextual bandit setting for the purpose of obtaining a robust off-policy selection strategy, where the selection strategy is evaluated based on the value of the chosen policy in a set of proposal (target) policies. We propose a new method to compute a lower bound on the value of an arbitrary target policy given some logged data in contextual bandits for a desired coverage. The lower bound is built around the so-called Self-normalized Importance Weighting (SN) estimator. It combines the use of a semi-empirical Efron-Stein tail inequality to control the concentration and Harris’ inequality to control the bias. The new approach is evaluated on a number of synthetic and real datasets and is found to be superior to its main competitors, both in terms of tightness of the confidence intervals and the quality of the policies chosen.}
}
Endnote
%0 Conference Paper
%T Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting
%A Ilja Kuzborskij
%A Claire Vernade
%A Andras Gyorgy
%A Csaba Szepesvari
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-kuzborskij21a
%I PMLR
%P 640--648
%U https://proceedings.mlr.press/v130/kuzborskij21a.html
%V 130
%X We consider off-policy evaluation in the contextual bandit setting for the purpose of obtaining a robust off-policy selection strategy, where the selection strategy is evaluated based on the value of the chosen policy in a set of proposal (target) policies. We propose a new method to compute a lower bound on the value of an arbitrary target policy given some logged data in contextual bandits for a desired coverage. The lower bound is built around the so-called Self-normalized Importance Weighting (SN) estimator. It combines the use of a semi-empirical Efron-Stein tail inequality to control the concentration and Harris’ inequality to control the bias. The new approach is evaluated on a number of synthetic and real datasets and is found to be superior to its main competitors, both in terms of tightness of the confidence intervals and the quality of the policies chosen.
APA
Kuzborskij, I., Vernade, C., Gyorgy, A., & Szepesvari, C. (2021). Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:640-648. Available from https://proceedings.mlr.press/v130/kuzborskij21a.html.

Related Material