Safe Optimal Design with Applications in Off-Policy Learning

Ruihao Zhu, Branislav Kveton
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2436-2447, 2022.

Abstract

Motivated by practical needs in online experimentation and off-policy learning, we study the problem of safe optimal design, where we develop a data logging policy that efficiently explores while achieving competitive rewards with a baseline production policy. We first show, perhaps surprisingly, that a common practice of mixing the production policy with uniform exploration, despite being safe, is sub-optimal in maximizing information gain. Then we propose a safe optimal logging policy for the case when no side information about the actions’ expected rewards is available. We improve upon this design by considering side information and also extend both approaches to a large number of actions with a linear reward model. We analyze how our data logging policies impact errors in off-policy learning. Finally, we empirically validate the benefit of our designs by conducting extensive experiments.
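The "common practice" the abstract refers to can be sketched concretely. Below is a minimal, hypothetical illustration (not the paper's proposed design) of the baseline it critiques: a logging policy that mixes the production policy with uniform exploration, which keeps every action's logging probability bounded away from zero and thus stays "safe", at the cost of information gain.

```python
import numpy as np

def mixed_logging_policy(pi0, epsilon):
    """Baseline mixture logging policy: (1 - epsilon) * pi0 + epsilon * uniform.

    pi0     : production policy, a probability vector over K actions.
    epsilon : exploration rate in [0, 1]; larger means more uniform exploration.
    """
    pi0 = np.asarray(pi0, dtype=float)
    K = pi0.size
    # Every action is logged with probability at least epsilon / K,
    # so inverse-propensity weights in off-policy learning stay bounded.
    return (1.0 - epsilon) * pi0 + epsilon * np.ones(K) / K

# Example: a production policy over 3 actions, mixed with 20% uniform exploration.
pi0 = np.array([0.7, 0.2, 0.1])
pi_log = mixed_logging_policy(pi0, epsilon=0.2)
```

The mixture remains a valid distribution and deviates from the production policy by at most epsilon in total variation, which is the sense in which it is safe; the paper's point is that this safe baseline is nevertheless sub-optimal for maximizing information gain.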

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-zhu22a,
  title     = {Safe Optimal Design with Applications in Off-Policy Learning},
  author    = {Zhu, Ruihao and Kveton, Branislav},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {2436--2447},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/zhu22a/zhu22a.pdf},
  url       = {https://proceedings.mlr.press/v151/zhu22a.html},
  abstract  = {Motivated by practical needs in online experimentation and off-policy learning, we study the problem of safe optimal design, where we develop a data logging policy that efficiently explores while achieving competitive rewards with a baseline production policy. We first show, perhaps surprisingly, that a common practice of mixing the production policy with uniform exploration, despite being safe, is sub-optimal in maximizing information gain. Then we propose a safe optimal logging policy for the case when no side information about the actions' expected rewards is available. We improve upon this design by considering side information and also extend both approaches to a large number of actions with a linear reward model. We analyze how our data logging policies impact errors in off-policy learning. Finally, we empirically validate the benefit of our designs by conducting extensive experiments.}
}
Endnote
%0 Conference Paper
%T Safe Optimal Design with Applications in Off-Policy Learning
%A Ruihao Zhu
%A Branislav Kveton
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-zhu22a
%I PMLR
%P 2436--2447
%U https://proceedings.mlr.press/v151/zhu22a.html
%V 151
%X Motivated by practical needs in online experimentation and off-policy learning, we study the problem of safe optimal design, where we develop a data logging policy that efficiently explores while achieving competitive rewards with a baseline production policy. We first show, perhaps surprisingly, that a common practice of mixing the production policy with uniform exploration, despite being safe, is sub-optimal in maximizing information gain. Then we propose a safe optimal logging policy for the case when no side information about the actions' expected rewards is available. We improve upon this design by considering side information and also extend both approaches to a large number of actions with a linear reward model. We analyze how our data logging policies impact errors in off-policy learning. Finally, we empirically validate the benefit of our designs by conducting extensive experiments.
APA
Zhu, R. & Kveton, B. (2022). Safe Optimal Design with Applications in Off-Policy Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2436-2447. Available from https://proceedings.mlr.press/v151/zhu22a.html.