Combining parametric and nonparametric models for off-policy evaluation

Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, Finale Doshi-Velez
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2366-2375, 2019.

Abstract

We consider a model-based approach to perform batch off-policy evaluation in reinforcement learning. Our method takes a mixture-of-experts approach to combine parametric and nonparametric models of the environment such that the final value estimate has the least expected error. We do so by first estimating the local accuracy of each model and then using a planner to select which model to use at every time step, so as to minimize the estimated return error along entire trajectories. Across a variety of domains, our mixture-based approach outperforms the individual models alone as well as state-of-the-art importance-sampling-based estimators.
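As a rough illustration of the idea (not the paper's actual algorithm, which uses a planner to minimize the estimated return error over entire trajectories), the Python sketch below greedily picks, at each simulated time step, whichever of two models has the smaller estimated local error, and rolls out the evaluation policy to produce a value estimate. All interfaces here (policy, parametric_model, nonparametric_model, parametric_error, nonparametric_error) are assumed for illustration and are not taken from the paper.

def estimate_return_by_model_selection(s0, policy,
                                        parametric_model, nonparametric_model,
                                        parametric_error, nonparametric_error,
                                        horizon=50, gamma=0.99):
    """Illustrative sketch of per-step model selection for model-based OPE.

    Each model maps (state, action) -> (next_state, reward); each error
    function returns an estimate of that model's local prediction error.
    The paper plans over whole trajectories; this sketch is greedy.
    """
    s, value = s0, 0.0
    for t in range(horizon):
        a = policy(s)  # action chosen by the evaluation policy
        # Use whichever model is believed to be more accurate at (s, a).
        if parametric_error(s, a) <= nonparametric_error(s, a):
            s, r = parametric_model(s, a)
        else:
            s, r = nonparametric_model(s, a)
        value += (gamma ** t) * r
    return value

Averaging such rollouts over many start states would give a model-based value estimate; the mixture-of-experts aspect lies in the per-state choice between the parametric and nonparametric model.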

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-gottesman19a,
  title     = {Combining parametric and nonparametric models for off-policy evaluation},
  author    = {Gottesman, Omer and Liu, Yao and Sussex, Scott and Brunskill, Emma and Doshi-Velez, Finale},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2366--2375},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/gottesman19a/gottesman19a.pdf},
  url       = {https://proceedings.mlr.press/v97/gottesman19a.html}
}
Endnote
%0 Conference Paper
%T Combining parametric and nonparametric models for off-policy evaluation
%A Omer Gottesman
%A Yao Liu
%A Scott Sussex
%A Emma Brunskill
%A Finale Doshi-Velez
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-gottesman19a
%I PMLR
%P 2366--2375
%U https://proceedings.mlr.press/v97/gottesman19a.html
%V 97
APA
Gottesman, O., Liu, Y., Sussex, S., Brunskill, E. & Doshi-Velez, F. (2019). Combining parametric and nonparametric models for off-policy evaluation. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2366-2375. Available from https://proceedings.mlr.press/v97/gottesman19a.html.