OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning

Sheng Yue, Xingyuan Hua, Ju Ren, Sen Lin, Junshan Zhang, Yaoxue Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:57966-58018, 2024.

Abstract

In this paper, we study offline-to-online Imitation Learning (IL), which pretrains an imitation policy from static demonstration data and then finetunes it rapidly with minimal environmental interaction. We find that naively combining existing offline IL and online IL methods tends to behave poorly in this setting, because the initial discriminator (often used in online IL) operates randomly and discordantly against the policy initialization, leading to misguided policy optimization and unlearning of the pretrained knowledge. To overcome this challenge, we propose a principled offline-to-online IL method, named OLLIE, that simultaneously learns a near-expert policy initialization and an aligned discriminator initialization, which can be seamlessly integrated into online IL, achieving smooth and fast finetuning. Empirically, OLLIE consistently and significantly outperforms the baseline methods on 20 challenging tasks, spanning continuous control and vision-based domains, in terms of performance, demonstration efficiency, and convergence speed. This work may serve as a foundation for further exploration of pretraining and finetuning in the context of IL.
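
To make the failure mode and the fix concrete, below is a minimal, hypothetical sketch (in PyTorch) of the handoff the abstract describes: instead of starting online adversarial IL with a random discriminator, the discriminator is first trained offline to be consistent with the pretrained policy. All module names, shapes, and the placeholder data here are illustrative assumptions, not OLLIE's actual architecture or objective.

    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim):
        # Small two-layer network used for both policy and discriminator.
        return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

    obs_dim, act_dim = 11, 3
    policy = mlp(obs_dim, act_dim)        # assumed pretrained offline (e.g., by behavior cloning)
    disc = mlp(obs_dim + act_dim, 1)      # discriminator D(s, a), aligned before going online

    # Alignment step (this sketch's stand-in for OLLIE's aligned discriminator
    # initialization): train D to separate expert state-action pairs from pairs
    # produced by the *pretrained* policy on expert states, so that at the start
    # of online finetuning D is no longer random relative to the policy.
    bce = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
    expert_s = torch.randn(256, obs_dim)  # placeholder demonstration batch
    expert_a = torch.randn(256, act_dim)
    for _ in range(100):
        with torch.no_grad():
            policy_a = policy(expert_s)   # actions the pretrained policy would take
        logits_exp = disc(torch.cat([expert_s, expert_a], dim=-1))
        logits_pol = disc(torch.cat([expert_s, policy_a], dim=-1))
        loss = (bce(logits_exp, torch.ones_like(logits_exp))
                + bce(logits_pol, torch.zeros_like(logits_pol)))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Online finetuning would then proceed as in standard adversarial IL
    # (e.g., GAIL-style), deriving a reward from D, with both the policy and
    # discriminator initializations carried over instead of reset.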

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-yue24a,
  title     = {{OLLIE}: Imitation Learning from Offline Pretraining to Online Finetuning},
  author    = {Yue, Sheng and Hua, Xingyuan and Ren, Ju and Lin, Sen and Zhang, Junshan and Zhang, Yaoxue},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {57966--58018},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/yue24a/yue24a.pdf},
  url       = {https://proceedings.mlr.press/v235/yue24a.html},
  abstract  = {In this paper, we study offline-to-online Imitation Learning (IL) that pretrains an imitation policy from static demonstration data, followed by fast finetuning with minimal environmental interaction. We find the naive combination of existing offline IL and online IL methods tends to behave poorly in this context, because the initial discriminator (often used in online IL) operates randomly and discordantly against the policy initialization, leading to misguided policy optimization and unlearning of pretraining knowledge. To overcome this challenge, we propose a principled offline-to-online IL method, named OLLIE, that simultaneously learns a near-expert policy initialization along with an aligned discriminator initialization, which can be seamlessly integrated into online IL, achieving smooth and fast finetuning. Empirically, OLLIE consistently and significantly outperforms the baseline methods in 20 challenging tasks, from continuous control to vision-based domains, in terms of performance, demonstration efficiency, and convergence speed. This work may serve as a foundation for further exploration of pretraining and finetuning in the context of IL.}
}
Endnote
%0 Conference Paper
%T OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning
%A Sheng Yue
%A Xingyuan Hua
%A Ju Ren
%A Sen Lin
%A Junshan Zhang
%A Yaoxue Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-yue24a
%I PMLR
%P 57966--58018
%U https://proceedings.mlr.press/v235/yue24a.html
%V 235
%X In this paper, we study offline-to-online Imitation Learning (IL) that pretrains an imitation policy from static demonstration data, followed by fast finetuning with minimal environmental interaction. We find the naive combination of existing offline IL and online IL methods tends to behave poorly in this context, because the initial discriminator (often used in online IL) operates randomly and discordantly against the policy initialization, leading to misguided policy optimization and unlearning of pretraining knowledge. To overcome this challenge, we propose a principled offline-to-online IL method, named OLLIE, that simultaneously learns a near-expert policy initialization along with an aligned discriminator initialization, which can be seamlessly integrated into online IL, achieving smooth and fast finetuning. Empirically, OLLIE consistently and significantly outperforms the baseline methods in 20 challenging tasks, from continuous control to vision-based domains, in terms of performance, demonstration efficiency, and convergence speed. This work may serve as a foundation for further exploration of pretraining and finetuning in the context of IL.
APA
Yue, S., Hua, X., Ren, J., Lin, S., Zhang, J. & Zhang, Y. (2024). OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:57966-58018. Available from https://proceedings.mlr.press/v235/yue24a.html.