Discriminator-Guided Model-Based Offline Imitation Learning

Wenjia Zhang, Haoran Xu, Haoyi Niu, Peng Cheng, Ming Li, Heming Zhang, Guyue Zhou, Xianyuan Zhan
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1266-1276, 2023.

Abstract

Offline imitation learning (IL) is a powerful method for solving decision-making problems from expert demonstrations without reward labels. Existing offline IL methods suffer severe performance degradation when expert data are limited. Incorporating a learned dynamics model can potentially improve the state-action space coverage of the expert data; however, it also raises challenging issues such as model approximation/generalization errors and the suboptimality of rollout data. In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator that simultaneously distinguishes the dynamics correctness and suboptimality of model rollout data against real expert demonstrations. DMIL adopts a novel cooperative-yet-adversarial learning strategy that uses the discriminator to guide and couple the learning processes of the policy and the dynamics model, resulting in improved model performance and robustness. Our framework also extends to the case where demonstrations contain a large proportion of suboptimal data. Experimental results show that DMIL and its extension achieve superior performance and robustness compared with state-of-the-art offline IL methods on small datasets.
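For readers who want a concrete picture of the training loop the abstract describes, below is a minimal PyTorch-style sketch of one discriminator-guided update. It is not the paper's implementation: the network architectures, the behavior-cloning and one-step MSE losses, the real/fake labeling scheme, the adv_weight coefficient, and the dimension choices are all illustrative assumptions, and DMIL's cooperative-yet-adversarial coupling of the dynamics model and discriminator is simplified here to a plain supervised model update.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    # Small fully connected network used for all three components.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

state_dim, action_dim = 17, 6          # illustrative sizes (e.g. a MuJoCo task)
policy = mlp(state_dim, action_dim)    # deterministic policy pi(s) -> a
dynamics = mlp(state_dim + action_dim, state_dim)  # model f(s, a) -> predicted s'
disc = mlp(2 * state_dim + action_dim, 1)          # D(s, a, s') -> real/fake logit

pi_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
dyn_opt = torch.optim.Adam(dynamics.parameters(), lr=3e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(s, a, s_next, adv_weight=0.1):
    """One update on a batch of expert transitions (s, a, s_next)."""
    # 1) Discriminator: expert transitions are labeled real (1); transitions
    #    produced by the current policy and model are labeled fake (0), so one
    #    classifier penalizes both dynamics error and action suboptimality.
    d_opt.zero_grad()
    with torch.no_grad():
        roll_a = policy(s)
        roll_s_next = dynamics(torch.cat([s, roll_a], dim=-1))
    real_logit = disc(torch.cat([s, a, s_next], dim=-1))
    fake_logit = disc(torch.cat([s, roll_a, roll_s_next], dim=-1))
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    d_loss.backward()
    d_opt.step()

    # 2) Policy: behavior cloning plus an adversarial term that pushes its
    #    model rollouts toward the expert side of the discriminator.
    pi_opt.zero_grad()
    pred_a = policy(s)
    pred_s_next = dynamics(torch.cat([s, pred_a], dim=-1))
    fool_logit = disc(torch.cat([s, pred_a, pred_s_next], dim=-1))
    pi_loss = ((pred_a - a) ** 2).mean() + \
              adv_weight * bce(fool_logit, torch.ones_like(fool_logit))
    pi_loss.backward()
    pi_opt.step()

    # 3) Dynamics model: supervised one-step prediction on expert data (a
    #    simplification of the paper's discriminator-coupled model update).
    dyn_opt.zero_grad()
    pred = dynamics(torch.cat([s, a], dim=-1))
    dyn_loss = ((pred - s_next) ** 2).mean()
    dyn_loss.backward()
    dyn_opt.step()
    return d_loss.item(), pi_loss.item(), dyn_loss.item()

# Usage with random stand-in data:
s = torch.randn(64, state_dim)
a = torch.randn(64, action_dim)
s_next = torch.randn(64, state_dim)
losses = train_step(s, a, s_next)

The structural idea mirrored from the abstract is that a single discriminator D(s, a, s') sees both the action and the predicted next state, so it can flag a rollout transition either for an inaccurate model prediction or for a suboptimal action.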

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-zhang23c,
  title     = {Discriminator-Guided Model-Based Offline Imitation Learning},
  author    = {Zhang, Wenjia and Xu, Haoran and Niu, Haoyi and Cheng, Peng and Li, Ming and Zhang, Heming and Zhou, Guyue and Zhan, Xianyuan},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1266--1276},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/zhang23c/zhang23c.pdf},
  url       = {https://proceedings.mlr.press/v205/zhang23c.html}
}
Endnote
%0 Conference Paper
%T Discriminator-Guided Model-Based Offline Imitation Learning
%A Wenjia Zhang
%A Haoran Xu
%A Haoyi Niu
%A Peng Cheng
%A Ming Li
%A Heming Zhang
%A Guyue Zhou
%A Xianyuan Zhan
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-zhang23c
%I PMLR
%P 1266--1276
%U https://proceedings.mlr.press/v205/zhang23c.html
%V 205
APA
Zhang, W., Xu, H., Niu, H., Cheng, P., Li, M., Zhang, H., Zhou, G. & Zhan, X. (2023). Discriminator-Guided Model-Based Offline Imitation Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1266-1276. Available from https://proceedings.mlr.press/v205/zhang23c.html.
