Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics

Michael Herman, Tobias Gindele, Jörg Wagner, Felix Schmitt, Wolfram Burgard
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:102-110, 2016.

Abstract

Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-herman16,
  title     = {Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics},
  author    = {Michael Herman and Tobias Gindele and Jörg Wagner and Felix Schmitt and Wolfram Burgard},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {102--110},
  year      = {2016},
  editor    = {Arthur Gretton and Christian C. Robert},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/herman16.pdf},
  url       = {http://proceedings.mlr.press/v51/herman16.html},
  abstract  = {Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.}
}
Endnote
%0 Conference Paper
%T Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics
%A Michael Herman
%A Tobias Gindele
%A Jörg Wagner
%A Felix Schmitt
%A Wolfram Burgard
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-herman16
%I PMLR
%J Proceedings of Machine Learning Research
%P 102--110
%U http://proceedings.mlr.press/v51/herman16.html
%V 51
%W PMLR
%X Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.
RIS
TY  - CPAPER
TI  - Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics
AU  - Michael Herman
AU  - Tobias Gindele
AU  - Jörg Wagner
AU  - Felix Schmitt
AU  - Wolfram Burgard
BT  - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
PY  - 2016/05/02
DA  - 2016/05/02
ED  - Arthur Gretton
ED  - Christian C. Robert
ID  - pmlr-v51-herman16
PB  - PMLR
DP  - PMLR
SP  - 102
EP  - 110
L1  - http://proceedings.mlr.press/v51/herman16.pdf
UR  - http://proceedings.mlr.press/v51/herman16.html
AB  - Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.
ER  -
APA
Herman, M., Gindele, T., Wagner, J., Schmitt, F. & Burgard, W. (2016). Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in PMLR 51:102-110.