Importance-Weighted Offline Learning Done Right

Germano Gabbianelli, Gergely Neu, Matteo Papini
Proceedings of The 35th International Conference on Algorithmic Learning Theory, PMLR 237:614-634, 2024.

Abstract

We study the problem of offline policy optimization in stochastic contextual bandit problems, where the goal is to learn a near-optimal policy based on a dataset of decision data collected by a suboptimal behavior policy. Rather than making any structural assumptions on the reward function, we assume access to a given policy class and aim to compete with the best comparator policy within this class. In this setting, a standard approach is to compute importance-weighted estimators of the value of each policy, and select a policy that maximizes the estimated value up to a “pessimistic” adjustment subtracted from the estimates to reduce their random fluctuations. In this paper, we show that a simple alternative approach based on the “implicit exploration” estimator of \citet{Neu2015} yields performance guarantees that are superior in nearly all possible terms to all previous results. Most notably, we remove an extremely restrictive “uniform coverage” assumption made in all previous works. These improvements are made possible by the observation that the upper and lower tails of importance-weighted estimators behave very differently from each other, and their careful control can massively improve on previous results that were all based on symmetric two-sided concentration inequalities. We also extend our results to infinite policy classes in a PAC-Bayesian fashion, and showcase the robustness of our algorithm to the choice of hyper-parameters by means of numerical simulations.
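As a brief illustration (not taken from the paper itself, whose notation and exact construction may differ), the “implicit exploration” idea amounts to smoothing the importance weights: given $n$ logged context–action–reward triples $(X_t, A_t, R_t)$ collected by a behavior policy $\pi_0$, the value of a candidate policy $\pi$ can be estimated as
\[
  \widehat{R}_\gamma(\pi) \;=\; \frac{1}{n}\sum_{t=1}^{n} \frac{\pi(A_t \mid X_t)}{\pi_0(A_t \mid X_t) + \gamma}\, R_t,
\]
where $\gamma > 0$ is a smoothing parameter. The added $\gamma$ in the denominator caps the importance weights at $1/\gamma$ and biases the estimate downward for nonnegative rewards, which is one way to see why the upper tail of such estimators can be controlled much more tightly than the lower tail.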

Cite this Paper


BibTeX
@InProceedings{pmlr-v237-gabbianelli24a,
  title     = {Importance-Weighted Offline Learning Done Right},
  author    = {Gabbianelli, Germano and Neu, Gergely and Papini, Matteo},
  booktitle = {Proceedings of The 35th International Conference on Algorithmic Learning Theory},
  pages     = {614--634},
  year      = {2024},
  editor    = {Vernade, Claire and Hsu, Daniel},
  volume    = {237},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v237/gabbianelli24a/gabbianelli24a.pdf},
  url       = {https://proceedings.mlr.press/v237/gabbianelli24a.html},
  abstract  = {We study the problem of offline policy optimization in stochastic contextual bandit problems, where the goal is to learn a near-optimal policy based on a dataset of decision data collected by a suboptimal behavior policy. Rather than making any structural assumptions on the reward function, we assume access to a given policy class and aim to compete with the best comparator policy within this class. In this setting, a standard approach is to compute importance-weighted estimators of the value of each policy, and select a policy that maximizes the estimated value up to a “pessimistic” adjustment subtracted from the estimates to reduce their random fluctuations. In this paper, we show that a simple alternative approach based on the “implicit exploration” estimator of \citet{Neu2015} yields performance guarantees that are superior in nearly all possible terms to all previous results. Most notably, we remove an extremely restrictive “uniform coverage” assumption made in all previous works. These improvements are made possible by the observation that the upper and lower tails of importance-weighted estimators behave very differently from each other, and their careful control can massively improve on previous results that were all based on symmetric two-sided concentration inequalities. We also extend our results to infinite policy classes in a PAC-Bayesian fashion, and showcase the robustness of our algorithm to the choice of hyper-parameters by means of numerical simulations.}
}
Endnote
%0 Conference Paper
%T Importance-Weighted Offline Learning Done Right
%A Germano Gabbianelli
%A Gergely Neu
%A Matteo Papini
%B Proceedings of The 35th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Claire Vernade
%E Daniel Hsu
%F pmlr-v237-gabbianelli24a
%I PMLR
%P 614--634
%U https://proceedings.mlr.press/v237/gabbianelli24a.html
%V 237
%X We study the problem of offline policy optimization in stochastic contextual bandit problems, where the goal is to learn a near-optimal policy based on a dataset of decision data collected by a suboptimal behavior policy. Rather than making any structural assumptions on the reward function, we assume access to a given policy class and aim to compete with the best comparator policy within this class. In this setting, a standard approach is to compute importance-weighted estimators of the value of each policy, and select a policy that maximizes the estimated value up to a “pessimistic” adjustment subtracted from the estimates to reduce their random fluctuations. In this paper, we show that a simple alternative approach based on the “implicit exploration” estimator of \citet{Neu2015} yields performance guarantees that are superior in nearly all possible terms to all previous results. Most notably, we remove an extremely restrictive “uniform coverage” assumption made in all previous works. These improvements are made possible by the observation that the upper and lower tails of importance-weighted estimators behave very differently from each other, and their careful control can massively improve on previous results that were all based on symmetric two-sided concentration inequalities. We also extend our results to infinite policy classes in a PAC-Bayesian fashion, and showcase the robustness of our algorithm to the choice of hyper-parameters by means of numerical simulations.
APA
Gabbianelli, G., Neu, G. & Papini, M. (2024). Importance-Weighted Offline Learning Done Right. Proceedings of The 35th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 237:614-634. Available from https://proceedings.mlr.press/v237/gabbianelli24a.html.