Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions

Sinong Geng, Houssam Nassif, Carlos Manzanares, Max Reppen, Ronnie Sircar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3431-3441, 2020.

Abstract

We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the Q-function, and the Reward function by deep learning. PQR does not assume that the reward depends solely on the state; instead, it allows the reward to depend on the choice of action as well. Moreover, PQR allows for stochastic state transitions. To accomplish this, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, which yields no reward. We present both estimators and algorithms for the PQR method. When the environment transition dynamics are known, we prove that the PQR reward estimator uniquely recovers the true reward. With unknown transitions, we bound the estimation error of PQR. Finally, we demonstrate the performance of PQR on synthetic and real-world datasets.
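
To make the pipeline concrete, below is a minimal tabular sketch of the three PQR steps under the paper's known-transition setting: an energy-based expert policy pi(a|s) proportional to exp(Q(s,a)/alpha) is observed, the anchor action's Bellman equation pins down the value function, and the soft Bellman equation is inverted to recover the reward. The paper's actual estimators use deep networks fit to sampled demonstrations; all names here (n_states, anchor, alpha, and so on) are illustrative assumptions, not the authors' code.

import numpy as np

# Minimal tabular PQR sketch (illustrative only; the paper uses deep networks).
rng = np.random.default_rng(0)
n_states, n_actions, anchor = 5, 3, 0   # action 0 plays the "do nothing" anchor
gamma, alpha = 0.9, 1.0                 # discount factor, entropy temperature

# Random MDP whose true reward is zero for the anchor action, as PQR assumes.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R_true = rng.normal(size=(n_states, n_actions))
R_true[:, anchor] = 0.0

# Generate the expert: soft value iteration yields the energy-based policy
# pi(a|s) = exp((Q(s,a) - V(s)) / alpha), V(s) = alpha * log sum_a exp(Q(s,a)/alpha).
Q = np.zeros((n_states, n_actions))
for _ in range(2000):
    V = alpha * np.log(np.exp(Q / alpha).sum(axis=1))
    Q = R_true + gamma * P @ V           # soft Bellman update
V = alpha * np.log(np.exp(Q / alpha).sum(axis=1))
pi = np.exp((Q - V[:, None]) / alpha)

# P-step: in practice the policy is estimated from demonstrations; here we
# observe pi exactly to isolate the Q- and R-steps.

# Q-step: Q(s,a) = alpha * log pi(a|s) + V(s), and the anchor action's
# Bellman equation Q(s, a0) = 0 + gamma * P[s, a0] @ V pins down V:
# (I - gamma * P_a0) V = -alpha * log pi(a0|s), a linear system in V.
A = np.eye(n_states) - gamma * P[:, anchor, :]
V_hat = np.linalg.solve(A, -alpha * np.log(pi[:, anchor]))
Q_hat = alpha * np.log(pi) + V_hat[:, None]

# R-step: invert the soft Bellman equation, r(s,a) = Q(s,a) - gamma * E[V(s')].
R_hat = Q_hat - gamma * P @ V_hat

print(np.max(np.abs(R_hat - R_true)))   # ~0: exact recovery with known transitions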

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-geng20a,
  title     = {Deep {PQR}: Solving Inverse Reinforcement Learning using Anchor Actions},
  author    = {Geng, Sinong and Nassif, Houssam and Manzanares, Carlos and Reppen, Max and Sircar, Ronnie},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3431--3441},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/geng20a/geng20a.pdf},
  url       = {https://proceedings.mlr.press/v119/geng20a.html},
  abstract  = {We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the Q-function, and the Reward function by deep learning. PQR does not assume that the reward depends solely on the state; instead, it allows the reward to depend on the choice of action as well. Moreover, PQR allows for stochastic state transitions. To accomplish this, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, which yields no reward. We present both estimators and algorithms for the PQR method. When the environment transition dynamics are known, we prove that the PQR reward estimator uniquely recovers the true reward. With unknown transitions, we bound the estimation error of PQR. Finally, we demonstrate the performance of PQR on synthetic and real-world datasets.}
}
Endnote
%0 Conference Paper
%T Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions
%A Sinong Geng
%A Houssam Nassif
%A Carlos Manzanares
%A Max Reppen
%A Ronnie Sircar
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-geng20a
%I PMLR
%P 3431--3441
%U https://proceedings.mlr.press/v119/geng20a.html
%V 119
%X We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the Q-function, and the Reward function by deep learning. PQR does not assume that the reward depends solely on the state; instead, it allows the reward to depend on the choice of action as well. Moreover, PQR allows for stochastic state transitions. To accomplish this, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, which yields no reward. We present both estimators and algorithms for the PQR method. When the environment transition dynamics are known, we prove that the PQR reward estimator uniquely recovers the true reward. With unknown transitions, we bound the estimation error of PQR. Finally, we demonstrate the performance of PQR on synthetic and real-world datasets.
APA
Geng, S., Nassif, H., Manzanares, C., Reppen, M. & Sircar, R. (2020). Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3431-3441. Available from https://proceedings.mlr.press/v119/geng20a.html.