Adaptive Estimator Selection for Off-Policy Evaluation
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9196-9205, 2020.
Abstract
We develop a generic data-driven method for estimator selection in off-policy policy evaluation settings. We establish a strong performance guarantee for the method, showing that it is competitive with the oracle estimator, up to a constant factor. Via in-depth case studies in contextual bandits and reinforcement learning, we demonstrate the generality and applicability of the method. We also perform comprehensive experiments demonstrating the empirical efficacy of our approach; in both case studies, it compares favorably with existing methods.
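The abstract does not spell out the estimators being selected among. As background only, off-policy evaluation in contextual bandits is commonly done with inverse propensity scoring (IPS), which reweights logged rewards by the ratio of target to logging policy probabilities. The sketch below is not the paper's selection procedure; all names and parameters (the two-action setup, the policies `mu` and `pi`, the reward means) are hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 2 actions, a logging policy mu that collected
# the data, and a target policy pi whose value we want to estimate offline.
n = 200_000
mu = np.array([0.5, 0.5])            # logging policy (uniform over actions)
pi = np.array([0.2, 0.8])            # target policy to evaluate
reward_mean = np.array([0.3, 0.7])   # Bernoulli reward mean per action

# Logged data: actions drawn from mu, rewards drawn from the true means.
a = rng.choice(2, size=n, p=mu)
r = rng.binomial(1, reward_mean[a])

# IPS estimate: reweight each logged reward by pi(a)/mu(a) so the
# average is unbiased for the target policy's value.
ips_estimate = float(np.mean(pi[a] / mu[a] * r))

# Ground truth for this synthetic problem, for comparison.
true_value = float(pi @ reward_mean)  # 0.2*0.3 + 0.8*0.7 = 0.62
```

With a large log, the IPS estimate concentrates around `true_value`; estimator selection becomes relevant when one must trade IPS's unbiasedness against the lower variance of biased alternatives.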