Sequential Counterfactual Risk Minimization

Houssam Zenati, Eustache Diemert, Matthieu Martin, Julien Mairal, Pierre Gaillard
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:40681-40706, 2023.

Abstract

Counterfactual Risk Minimization (CRM) is a framework for dealing with the logged bandit feedback problem, where the goal is to improve a logging policy using offline data. In this paper, we explore the case where it is possible to deploy learned policies multiple times and acquire new data. We extend the CRM principle and its theory to this scenario, which we call "Sequential Counterfactual Risk Minimization (SCRM)." We introduce a novel counterfactual estimator and identify conditions that can improve the performance of CRM in terms of excess risk and regret rates, by using an analysis similar to restart strategies in accelerated optimization methods. We also provide an empirical evaluation of our method in both discrete and continuous action settings, and demonstrate the benefits of multiple deployments of CRM.
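The CRM setting described above evaluates a candidate policy offline by importance-weighting the costs logged under the logging policy. As background for readers, here is a minimal sketch of a standard clipped inverse-propensity-scoring (IPS) risk estimate; this is the classical estimator, not the paper's novel one, and the function name and toy data are illustrative:

```python
import numpy as np

def ips_risk_estimate(costs, logging_probs, target_probs, clip=10.0):
    """Clipped IPS estimate of a target policy's risk from logged bandit
    feedback.

    costs:         observed losses for the logged actions
    logging_probs: probability the logging policy assigned to each logged action
    target_probs:  probability the candidate policy assigns to the same action
    clip:          cap on the importance weights to control variance
    """
    weights = np.minimum(target_probs / logging_probs, clip)
    return float(np.mean(costs * weights))

# Toy example: four logged interactions.
costs = np.array([1.0, 0.0, 0.5, 1.0])
logging_probs = np.array([0.5, 0.25, 0.5, 0.2])
target_probs = np.array([0.25, 0.5, 0.5, 0.4])
print(ips_risk_estimate(costs, logging_probs, target_probs))  # 0.75
```

Minimizing such an estimate (plus a variance penalty) over a policy class is the CRM principle; the paper extends this to repeated deployments, where each learned policy logs fresh data for the next round.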

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zenati23a,
  title     = {Sequential Counterfactual Risk Minimization},
  author    = {Zenati, Houssam and Diemert, Eustache and Martin, Matthieu and Mairal, Julien and Gaillard, Pierre},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {40681--40706},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zenati23a/zenati23a.pdf},
  url       = {https://proceedings.mlr.press/v202/zenati23a.html},
  abstract  = {Counterfactual Risk Minimization (CRM) is a framework for dealing with the logged bandit feedback problem, where the goal is to improve a logging policy using offline data. In this paper, we explore the case where it is possible to deploy learned policies multiple times and acquire new data. We extend the CRM principle and its theory to this scenario, which we call "Sequential Counterfactual Risk Minimization (SCRM)." We introduce a novel counterfactual estimator and identify conditions that can improve the performance of CRM in terms of excess risk and regret rates, by using an analysis similar to restart strategies in accelerated optimization methods. We also provide an empirical evaluation of our method in both discrete and continuous action settings, and demonstrate the benefits of multiple deployments of CRM.}
}
Endnote
%0 Conference Paper
%T Sequential Counterfactual Risk Minimization
%A Houssam Zenati
%A Eustache Diemert
%A Matthieu Martin
%A Julien Mairal
%A Pierre Gaillard
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zenati23a
%I PMLR
%P 40681--40706
%U https://proceedings.mlr.press/v202/zenati23a.html
%V 202
%X Counterfactual Risk Minimization (CRM) is a framework for dealing with the logged bandit feedback problem, where the goal is to improve a logging policy using offline data. In this paper, we explore the case where it is possible to deploy learned policies multiple times and acquire new data. We extend the CRM principle and its theory to this scenario, which we call "Sequential Counterfactual Risk Minimization (SCRM)." We introduce a novel counterfactual estimator and identify conditions that can improve the performance of CRM in terms of excess risk and regret rates, by using an analysis similar to restart strategies in accelerated optimization methods. We also provide an empirical evaluation of our method in both discrete and continuous action settings, and demonstrate the benefits of multiple deployments of CRM.
APA
Zenati, H., Diemert, E., Martin, M., Mairal, J. & Gaillard, P. (2023). Sequential Counterfactual Risk Minimization. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:40681-40706. Available from https://proceedings.mlr.press/v202/zenati23a.html.