Balanced Off-Policy Evaluation in General Action Spaces

Arjun Sondhi, David Arbour, Drew Dimmery
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2413-2423, 2020.

Abstract

Estimation of importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance: a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights that minimize this imbalance. Estimation of these weights reduces to a binary classification problem regardless of action type. We show that minimizing the risk of the classifier implies minimization of imbalance with respect to the desired counterfactual distribution. In turn, this imbalance is tied to the error of the off-policy estimate, allowing for easy tuning of hyperparameters. We provide experimental evidence that B-OPE improves weighting-based approaches to offline policy evaluation in both discrete and continuous action spaces.
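
Below is a minimal sketch of the classifier-based weighting idea the abstract describes: logged state-action pairs are labeled against pairs whose actions are drawn from the evaluation policy, a probabilistic classifier is trained to tell them apart, and its odds are used as importance weights. The logistic-regression model, the function names, and the toy data are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_weights(X, a_logged, a_target):
    # Label 0 = observed (logged) pairs, 1 = pairs whose actions are
    # drawn from the evaluation policy; both share the same contexts X.
    Z0 = np.column_stack([X, a_logged])
    Z1 = np.column_stack([X, a_target])
    Z = np.vstack([Z0, Z1])
    y = np.concatenate([np.zeros(len(Z0)), np.ones(len(Z1))])
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    # With balanced classes, the classifier's odds on the logged pairs
    # estimate the density ratio pi_eval(a|x) / pi_log(a|x).
    p = clf.predict_proba(Z0)[:, 1]
    return p / (1.0 - p)

def weighted_value(rewards, w):
    # Self-normalized importance-weighted estimate of the target policy's value.
    return np.sum(w * rewards) / np.sum(w)

# Toy example with a continuous action space.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
a_logged = X @ np.array([0.2, -0.1, 0.3]) + rng.normal(size=n)
a_target = a_logged + 0.5  # stand-in draws from a shifted evaluation policy
rewards = X[:, 0] + a_logged + rng.normal(scale=0.1, size=n)

w = classifier_weights(X, a_logged[:, None], a_target[:, None])
print(weighted_value(rewards, w))

The odds trick, using probabilistic classification as density-ratio estimation, is what makes a reduction like this agnostic to action type: any classifier that accepts the concatenated (context, action) features works, whether actions are discrete or continuous.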

Cite this Paper

BibTeX
@InProceedings{pmlr-v108-sondhi20a,
  title     = {Balanced Off-Policy Evaluation in General Action Spaces},
  author    = {Sondhi, Arjun and Arbour, David and Dimmery, Drew},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2413--2423},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/sondhi20a/sondhi20a.pdf},
  url       = {https://proceedings.mlr.press/v108/sondhi20a.html},
  abstract  = {Estimation of importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance—a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights which minimize this imbalance. Estimation of these weights reduces to a binary classification problem regardless of action type. We show that minimizing the risk of the classifier implies minimization of imbalance to the desired counterfactual distribution. In turn, this is tied to the error of the off-policy estimate, allowing for easy tuning of hyperparameters. We provide experimental evidence that B-OPE improves weighting-based approaches for offline policy evaluation in both discrete and continuous action spaces.}
}
Endnote
%0 Conference Paper
%T Balanced Off-Policy Evaluation in General Action Spaces
%A Arjun Sondhi
%A David Arbour
%A Drew Dimmery
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-sondhi20a
%I PMLR
%P 2413--2423
%U https://proceedings.mlr.press/v108/sondhi20a.html
%V 108
%X Estimation of importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance—a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights which minimize this imbalance. Estimation of these weights reduces to a binary classification problem regardless of action type. We show that minimizing the risk of the classifier implies minimization of imbalance to the desired counterfactual distribution. In turn, this is tied to the error of the off-policy estimate, allowing for easy tuning of hyperparameters. We provide experimental evidence that B-OPE improves weighting-based approaches for offline policy evaluation in both discrete and continuous action spaces.
APA
Sondhi, A., Arbour, D. & Dimmery, D. (2020). Balanced Off-Policy Evaluation in General Action Spaces. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2413-2423. Available from https://proceedings.mlr.press/v108/sondhi20a.html.