Oracle-Efficient Pessimism: Offline Policy Optimization In Contextual Bandits

Lequn Wang, Akshay Krishnamurthy, Alex Slivkins
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:766-774, 2024.

Abstract

We consider offline policy optimization (OPO) in contextual bandits, where one is given a fixed dataset of logged interactions. While pessimistic regularizers are typically used to mitigate distribution shift, prior implementations thereof are either specialized or computationally inefficient. We present the first \emph{general} oracle-efficient algorithm for pessimistic OPO: it reduces to supervised learning, leading to broad applicability. We obtain statistical guarantees analogous to those for prior pessimistic approaches. We instantiate our approach for both discrete and continuous actions and perform experiments in both settings, showing advantage over unregularized OPO across a wide range of configurations.

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-wang24a,
  title     = {Oracle-Efficient Pessimism: Offline Policy Optimization In Contextual Bandits},
  author    = {Wang, Lequn and Krishnamurthy, Akshay and Slivkins, Alex},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {766--774},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/wang24a/wang24a.pdf},
  url       = {https://proceedings.mlr.press/v238/wang24a.html},
  abstract  = {We consider offline policy optimization (OPO) in contextual bandits, where one is given a fixed dataset of logged interactions. While pessimistic regularizers are typically used to mitigate distribution shift, prior implementations thereof are either specialized or computationally inefficient. We present the first \emph{general} oracle-efficient algorithm for pessimistic OPO: it reduces to supervised learning, leading to broad applicability. We obtain statistical guarantees analogous to those for prior pessimistic approaches. We instantiate our approach for both discrete and continuous actions and perform experiments in both settings, showing advantage over unregularized OPO across a wide range of configurations.}
}
Endnote
%0 Conference Paper
%T Oracle-Efficient Pessimism: Offline Policy Optimization In Contextual Bandits
%A Lequn Wang
%A Akshay Krishnamurthy
%A Alex Slivkins
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-wang24a
%I PMLR
%P 766--774
%U https://proceedings.mlr.press/v238/wang24a.html
%V 238
%X We consider offline policy optimization (OPO) in contextual bandits, where one is given a fixed dataset of logged interactions. While pessimistic regularizers are typically used to mitigate distribution shift, prior implementations thereof are either specialized or computationally inefficient. We present the first \emph{general} oracle-efficient algorithm for pessimistic OPO: it reduces to supervised learning, leading to broad applicability. We obtain statistical guarantees analogous to those for prior pessimistic approaches. We instantiate our approach for both discrete and continuous actions and perform experiments in both settings, showing advantage over unregularized OPO across a wide range of configurations.
APA
Wang, L., Krishnamurthy, A. & Slivkins, A. (2024). Oracle-Efficient Pessimism: Offline Policy Optimization In Contextual Bandits. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:766-774. Available from https://proceedings.mlr.press/v238/wang24a.html.