f-IRL: Inverse Reinforcement Learning via State Marginal Matching

Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Ben Eysenbach
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:529-551, 2021.

Abstract

Imitation learning is well-suited for robotic tasks where it is difficult to directly program the behavior or specify a cost for optimal control. In this work, we propose a method for learning the reward function (and the corresponding policy) to match the expert state density. Our main result is the analytic gradient of any f-divergence between the agent and expert state distribution w.r.t. reward parameters. Based on the derived gradient, we present an algorithm, f-IRL, that recovers a stationary reward function from the expert density by gradient descent. We show that f-IRL can learn behaviors from a hand-designed target state density or implicitly through expert observations. Our method outperforms adversarial imitation learning methods in terms of sample efficiency and the required number of expert trajectories on IRL benchmarks. Moreover, we show that the recovered reward function can be used to quickly solve downstream tasks, and empirically demonstrate its utility on hard-to-explore tasks and for behavior transfer across changes in dynamics. Project videos and code are available at https://sites.google.com/view/f-irl/home.
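
To make the core idea concrete, below is a minimal, self-contained sketch of matching a hand-designed target state density by gradient descent on a reward function. It is an illustration under stated assumptions, not the paper's algorithm: it assumes a finite state space and replaces the MaxEnt RL inner loop with a one-step surrogate in which the agent's state marginal is a softmax of the learned reward, and it uses the forward-KL member of the f-divergence family, for which the surrogate gradient has a simple closed form. The function name firl_sketch, the temperature alpha, and the example target density are invented for this illustration.

import numpy as np

# Sketch assumptions (not from the paper): a finite state space, and a one-step
# "MaxEnt" surrogate in which the agent's state marginal is a softmax of the
# learned reward, rho_theta = softmax(r / alpha). In f-IRL itself, rho_theta
# comes from rolling out a MaxEnt RL policy trained on the current reward, and
# the update uses the paper's analytic f-divergence gradient.
def firl_sketch(rho_expert, n_iters=2000, lr=0.5, alpha=1.0):
    r = np.zeros(len(rho_expert))               # reward parameters: one value per state
    for _ in range(n_iters):
        rho_agent = np.exp(r / alpha)
        rho_agent /= rho_agent.sum()            # surrogate agent state marginal
        grad = (rho_agent - rho_expert) / alpha # gradient of KL(rho_E || rho_theta) w.r.t. r
        r -= lr * grad                          # gradient descent on the reward parameters
    return r

# Hand-designed target state density over five states (the "expert density" setting).
rho_expert = np.array([0.05, 0.10, 0.50, 0.30, 0.05])
reward = firl_sketch(rho_expert)
rho_agent = np.exp(reward)
rho_agent /= rho_agent.sum()
print("recovered reward:", np.round(reward, 2))
print("agent marginal:  ", np.round(rho_agent, 3), "vs expert:", rho_expert)

Under these assumptions the recovered reward converges to alpha * log(rho_expert) up to a constant, and the induced state marginal matches the target density, which is the behavior the full algorithm obtains with a learned MaxEnt RL policy in place of the softmax surrogate.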

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-ni21a,
  title     = {f-IRL: Inverse Reinforcement Learning via State Marginal Matching},
  author    = {Ni, Tianwei and Sikchi, Harshit and Wang, Yufei and Gupta, Tejus and Lee, Lisa and Eysenbach, Ben},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {529--551},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/ni21a/ni21a.pdf},
  url       = {https://proceedings.mlr.press/v155/ni21a.html}
}
Endnote
%0 Conference Paper
%T f-IRL: Inverse Reinforcement Learning via State Marginal Matching
%A Tianwei Ni
%A Harshit Sikchi
%A Yufei Wang
%A Tejus Gupta
%A Lisa Lee
%A Ben Eysenbach
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-ni21a
%I PMLR
%P 529--551
%U https://proceedings.mlr.press/v155/ni21a.html
%V 155
APA
Ni, T., Sikchi, H., Wang, Y., Gupta, T., Lee, L. & Eysenbach, B. (2021). f-IRL: Inverse Reinforcement Learning via State Marginal Matching. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:529-551. Available from https://proceedings.mlr.press/v155/ni21a.html.
