Iterative Regularized Policy Optimization with Imperfect Demonstrations

Gong Xudong, Feng Dawei, Kele Xu, Yuanzhao Zhai, Chengkang Yao, Weijia Wang, Bo Ding, Huaimin Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:55547-55568, 2024.

Abstract

Imitation learning heavily relies on the quality of the provided demonstrations. In scenarios where demonstrations are imperfect and rare, a prevalent approach for refining policies is online fine-tuning with reinforcement learning, in which a Kullback–Leibler (KL) regularization is often employed to stabilize the learning process. However, our investigation reveals that, on the one hand, imperfect demonstrations can bias the online learning process, and, on the other hand, the KL regularization further constrains the improvement achievable through online policy exploration. To address these issues, we propose Iterative Regularized Policy Optimization (IRPO), a framework that alternates between offline imitation learning and online reinforcement exploration. Specifically, the policy learned online serves as the demonstrator for successive learning iterations, with a demonstration boosting step that consistently enhances the quality of the demonstrations. Experimental validations conducted on widely used benchmarks and a novel fixed-wing UAV control task consistently demonstrate the effectiveness of IRPO in improving both the demonstration quality and the policy performance. Our code is available at https://github.com/GongXudong/IRPO.
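
For orientation, below is a minimal, hypothetical Python sketch of the iterative scheme the abstract describes. The helpers behavior_cloning, rl_finetune_with_kl, collect_rollouts, and boost_demonstrations are assumed interfaces of our own naming, not the authors' API; refer to the linked repository for the actual implementation.

from typing import Callable, List, Tuple

# A trajectory is a list of (state, action, reward) triples; the concrete
# state/action types are left abstract in this sketch.
Trajectory = List[Tuple[object, object, float]]


def irpo_loop(
    demos: List[Trajectory],
    behavior_cloning: Callable[[List[Trajectory]], object],
    rl_finetune_with_kl: Callable[[object, float], object],
    collect_rollouts: Callable[[object, int], List[Trajectory]],
    boost_demonstrations: Callable[[List[Trajectory], List[Trajectory]], List[Trajectory]],
    iterations: int = 3,
    kl_coef: float = 0.1,
    rollouts_per_iter: int = 100,
) -> object:
    """Alternate offline imitation and KL-regularized online RL, feeding the
    online policy's rollouts back as boosted demonstrations (schematic only)."""
    policy = None
    for _ in range(iterations):
        # 1) Offline stage: imitate the current (possibly imperfect) demonstrations.
        bc_policy = behavior_cloning(demos)
        # 2) Online stage: RL fine-tuning with a KL penalty toward the imitation
        #    policy, which stabilizes learning but also anchors it to the demos.
        policy = rl_finetune_with_kl(bc_policy, kl_coef)
        # 3) Demonstration boosting: collect rollouts from the improved online
        #    policy and merge the best of them into the demonstration set, so the
        #    next iteration starts from higher-quality demonstrations.
        new_trajectories = collect_rollouts(policy, rollouts_per_iter)
        demos = boost_demonstrations(demos, new_trajectories)
    return policy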

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-xudong24a,
  title     = {Iterative Regularized Policy Optimization with Imperfect Demonstrations},
  author    = {Xudong, Gong and Dawei, Feng and Xu, Kele and Zhai, Yuanzhao and Yao, Chengkang and Wang, Weijia and Ding, Bo and Wang, Huaimin},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {55547--55568},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/xudong24a/xudong24a.pdf},
  url       = {https://proceedings.mlr.press/v235/xudong24a.html}
}
APA
Xudong, G., Dawei, F., Xu, K., Zhai, Y., Yao, C., Wang, W., Ding, B. & Wang, H. (2024). Iterative Regularized Policy Optimization with Imperfect Demonstrations. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:55547-55568. Available from https://proceedings.mlr.press/v235/xudong24a.html.
