Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1702-1712, 2022.

Abstract

Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address this issue, we first propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples from the offline dataset. Furthermore, we leverage multiple Q-functions trained pessimistically offline, thereby preventing overoptimism concerning unfamiliar actions at novel states during the initial training phase. We show that the proposed method improves the sample efficiency and final performance of fine-tuned robotic agents on various locomotion and manipulation tasks. Our code is available at: https://github.com/shlee94/Off2OnRL.
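To make the two ideas in the abstract concrete, below is a minimal, hypothetical Python/PyTorch sketch (not taken from the paper or its repository). It assumes a lower-confidence-bound style aggregation for the pessimistic Q-ensemble and a simple priority-weighted sampler for balanced replay; the names pessimistic_target and BalancedReplay, the coefficient beta, and the priority scores are illustrative assumptions, and the paper's actual update rules may differ.

    # Hypothetical sketch (not the authors' code): a pessimistic Q-ensemble target
    # and a toy balanced replay sampler. Assumes a mean-minus-std (lower confidence
    # bound) aggregation and hand-assigned priorities favoring online transitions.
    import random
    import torch

    def pessimistic_target(q_values: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
        """Combine an ensemble of Q estimates conservatively.

        q_values has shape (num_ensemble, batch), holding Q_i(s', a') for each
        ensemble member. Returning mean - beta * std keeps the target low for
        unfamiliar state-action pairs, where ensemble members disagree.
        """
        return q_values.mean(dim=0) - beta * q_values.std(dim=0)

    class BalancedReplay:
        """Toy balanced replay: offline and online transitions live in one pool and
        are sampled with priorities, so that online transitions (and offline ones
        judged near-on-policy by some score) are drawn more often."""

        def __init__(self):
            self.transitions = []  # each item: (transition, priority)

        def add(self, transition, priority: float):
            self.transitions.append((transition, priority))

        def sample(self, batch_size: int):
            weights = [p for _, p in self.transitions]
            items = [t for t, _ in self.transitions]
            return random.choices(items, weights=weights, k=batch_size)

    if __name__ == "__main__":
        # Five ensemble estimates of Q(s', a') for a batch of two transitions.
        q_next = torch.tensor([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1], [1.1, 1.9], [1.0, 2.0]])
        reward = torch.tensor([0.5, 0.3])
        done = torch.tensor([0.0, 0.0])
        target = reward + 0.99 * (1.0 - done) * pessimistic_target(q_next, beta=1.0)
        print(target)

        buffer = BalancedReplay()
        buffer.add("offline_transition", priority=0.2)  # far from current policy
        buffer.add("online_transition", priority=1.0)   # freshly collected
        batch = buffer.sample(batch_size=4)
        print(batch)

In a real fine-tuning loop the priorities would come from a learned estimate of how on-policy each offline transition is, rather than the fixed constants used above.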

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-lee22d,
  title     = {Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble},
  author    = {Lee, Seunghyun and Seo, Younggyo and Lee, Kimin and Abbeel, Pieter and Shin, Jinwoo},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1702--1712},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/lee22d/lee22d.pdf},
  url       = {https://proceedings.mlr.press/v164/lee22d.html},
  abstract  = {Recent advance in deep offline reinforcement learning (RL) has made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address this issue, we first propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples from the offline dataset. Furthermore, we leverage multiple Q-functions trained pessimistically offline, thereby preventing overoptimism concerning unfamiliar actions at novel states during the initial training phase. We show that the proposed method improves sample-efficiency and final performance of the fine-tuned robotic agents on various locomotion and manipulation tasks. Our code is available at: https://github.com/shlee94/Off2OnRL.}
}
Endnote
%0 Conference Paper
%T Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble
%A Seunghyun Lee
%A Younggyo Seo
%A Kimin Lee
%A Pieter Abbeel
%A Jinwoo Shin
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-lee22d
%I PMLR
%P 1702--1712
%U https://proceedings.mlr.press/v164/lee22d.html
%V 164
%X Recent advance in deep offline reinforcement learning (RL) has made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address this issue, we first propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples from the offline dataset. Furthermore, we leverage multiple Q-functions trained pessimistically offline, thereby preventing overoptimism concerning unfamiliar actions at novel states during the initial training phase. We show that the proposed method improves sample-efficiency and final performance of the fine-tuned robotic agents on various locomotion and manipulation tasks. Our code is available at: https://github.com/shlee94/Off2OnRL.
APA
Lee, S., Seo, Y., Lee, K., Abbeel, P., & Shin, J. (2022). Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1702-1712. Available from https://proceedings.mlr.press/v164/lee22d.html.
