Conformal Off-Policy Prediction

Yingying Zhang, Chengchun Shi, Shikai Luo
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:2751-2768, 2023.

Abstract

Off-policy evaluation is critical in a number of applications where new policies need to be evaluated offline before online deployment. Most existing methods focus on the expected return, define the target parameter through averaging and provide a point estimator only. In this paper, we develop a novel procedure to produce reliable interval estimators for a target policy’s return starting from any initial state. Our proposal accounts for the variability of the return around its expectation, focuses on the individual effect and offers valid uncertainty quantification. Our main idea lies in designing a pseudo policy that generates subsamples as if they were sampled from the target policy so that existing conformal prediction algorithms are applicable to prediction interval construction. Our methods are justified by theories, synthetic data and real data from short-video platforms.
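The prediction-interval construction the abstract refers to builds on standard conformal prediction. As a general illustration of that underlying technique (not the paper's pseudo-policy construction), a minimal split-conformal sketch looks like the following; the toy data and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: observed returns and a point
# predictor's estimates for them (here a constant toy predictor).
returns_cal = rng.normal(loc=1.0, scale=0.5, size=500)
pred_cal = np.full(500, 1.0)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(returns_cal - pred_cal)

# Finite-sample-valid quantile for 90% marginal coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval around a new point prediction.
pred_new = 1.0
interval = (pred_new - q, pred_new + q)
```

Under exchangeability of the calibration and test points, the interval covers a new return with probability at least 1 - alpha; the paper's contribution lies in recovering such exchangeable subsamples for a target policy from off-policy data.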

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-zhang23c,
  title     = {Conformal Off-Policy Prediction},
  author    = {Zhang, Yingying and Shi, Chengchun and Luo, Shikai},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {2751--2768},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/zhang23c/zhang23c.pdf},
  url       = {https://proceedings.mlr.press/v206/zhang23c.html},
  abstract  = {Off-policy evaluation is critical in a number of applications where new policies need to be evaluated offline before online deployment. Most existing methods focus on the expected return, define the target parameter through averaging and provide a point estimator only. In this paper, we develop a novel procedure to produce reliable interval estimators for a target policy’s return starting from any initial state. Our proposal accounts for the variability of the return around its expectation, focuses on the individual effect and offers valid uncertainty quantification. Our main idea lies in designing a pseudo policy that generates subsamples as if they were sampled from the target policy so that existing conformal prediction algorithms are applicable to prediction interval construction. Our methods are justified by theories, synthetic data and real data from short-video platforms.}
}
Endnote
%0 Conference Paper
%T Conformal Off-Policy Prediction
%A Yingying Zhang
%A Chengchun Shi
%A Shikai Luo
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-zhang23c
%I PMLR
%P 2751--2768
%U https://proceedings.mlr.press/v206/zhang23c.html
%V 206
%X Off-policy evaluation is critical in a number of applications where new policies need to be evaluated offline before online deployment. Most existing methods focus on the expected return, define the target parameter through averaging and provide a point estimator only. In this paper, we develop a novel procedure to produce reliable interval estimators for a target policy’s return starting from any initial state. Our proposal accounts for the variability of the return around its expectation, focuses on the individual effect and offers valid uncertainty quantification. Our main idea lies in designing a pseudo policy that generates subsamples as if they were sampled from the target policy so that existing conformal prediction algorithms are applicable to prediction interval construction. Our methods are justified by theories, synthetic data and real data from short-video platforms.
APA
Zhang, Y., Shi, C. & Luo, S. (2023). Conformal Off-Policy Prediction. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:2751-2768. Available from https://proceedings.mlr.press/v206/zhang23c.html.
