Learning the Reward Function for a Misspecified Model

Erik Talvitie
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4838-4847, 2018.

Abstract

In model-based reinforcement learning it is typical to decouple the problems of learning the dynamics model and learning the reward function. However, when the dynamics model is flawed, it may generate erroneous states that would never occur in the true environment. It is not clear a priori what value the reward function should assign to such states. This paper presents a novel error bound that accounts for the reward model’s behavior in states sampled from the model. This bound is used to extend the existing Hallucinated DAgger-MC algorithm, which offers theoretical performance guarantees in deterministic MDPs without assuming that a perfect model can be learned. Empirically, this approach to reward learning can yield dramatic improvements in control performance when the dynamics model is flawed.
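As a toy illustration of the problem the abstract raises (a hedged sketch, not the paper's Hallucinated DAgger-MC extension), the snippet below contrasts a reward model fit only on real states with one that also assigns values to erroneous states reachable under a flawed dynamics model. All names (true_step, flawed_step, rollout_return, the chain environment itself) are hypothetical and chosen only for illustration.

# Illustrative sketch: why the reward model's behavior on states sampled
# from a flawed dynamics model matters for planning. Not the paper's algorithm.

# True environment: a deterministic 5-state chain; reward 1 at the goal state 4.
GOAL = 4

def true_step(s, a):              # actions a in {-1, +1}
    return max(0, min(GOAL, s + a))

def true_reward(s):
    return 1.0 if s == GOAL else 0.0

# Misspecified model: it can "overshoot" into an erroneous state 5
# that the real environment never produces.
def flawed_step(s, a):
    ns = s + a
    return ns if 0 <= ns <= GOAL + 1 else max(0, min(GOAL, ns))

# Reward model A: learned only from real experience, so it has never seen
# state 5; its value there is an arbitrary default.
reward_real_only = {s: true_reward(s) for s in range(GOAL + 1)}
reward_real_only[5] = 1.0         # arbitrary guess on an unseen, erroneous state

# Reward model B: also trained on states sampled from the flawed model
# ("hallucinated" states), so the erroneous state gets a sensible value.
reward_halluc = dict(reward_real_only)
reward_halluc[5] = 0.0            # the erroneous state is not the goal

def rollout_return(reward_model, start, actions):
    """Accumulated reward when planning with the flawed dynamics model."""
    s, total = start, 0.0
    for a in actions:
        s = flawed_step(s, a)
        total += reward_model.get(s, 0.0)
    return total

# Planning with reward model A credits phantom reward at the impossible state 5...
print(rollout_return(reward_real_only, 3, [+1, +1]))   # 2.0
# ...while reward model B keeps the planner's evaluation consistent with reality.
print(rollout_return(reward_halluc, 3, [+1, +1]))      # 1.0

The point of the sketch is only that the value assigned to model-generated erroneous states can change which plans look good, which is the gap the paper's error bound and reward-learning approach address.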

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-talvitie18a,
  title     = {Learning the Reward Function for a Misspecified Model},
  author    = {Talvitie, Erik},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4838--4847},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/talvitie18a/talvitie18a.pdf},
  url       = {https://proceedings.mlr.press/v80/talvitie18a.html},
  abstract  = {In model-based reinforcement learning it is typical to decouple the problems of learning the dynamics model and learning the reward function. However, when the dynamics model is flawed, it may generate erroneous states that would never occur in the true environment. It is not clear a priori what value the reward function should assign to such states. This paper presents a novel error bound that accounts for the reward model’s behavior in states sampled from the model. This bound is used to extend the existing Hallucinated DAgger-MC algorithm, which offers theoretical performance guarantees in deterministic MDPs that do not assume a perfect model can be learned. Empirically, this approach to reward learning can yield dramatic improvements in control performance when the dynamics model is flawed.}
}
Endnote
%0 Conference Paper
%T Learning the Reward Function for a Misspecified Model
%A Erik Talvitie
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-talvitie18a
%I PMLR
%P 4838--4847
%U https://proceedings.mlr.press/v80/talvitie18a.html
%V 80
%X In model-based reinforcement learning it is typical to decouple the problems of learning the dynamics model and learning the reward function. However, when the dynamics model is flawed, it may generate erroneous states that would never occur in the true environment. It is not clear a priori what value the reward function should assign to such states. This paper presents a novel error bound that accounts for the reward model’s behavior in states sampled from the model. This bound is used to extend the existing Hallucinated DAgger-MC algorithm, which offers theoretical performance guarantees in deterministic MDPs that do not assume a perfect model can be learned. Empirically, this approach to reward learning can yield dramatic improvements in control performance when the dynamics model is flawed.
APA
Talvitie, E. (2018). Learning the Reward Function for a Misspecified Model. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4838-4847. Available from https://proceedings.mlr.press/v80/talvitie18a.html.