MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning

Rafael Rafailov, Kyle Beltran Hatch, Victor Kolev, John D. Martin, Mariano Phielipp, Chelsea Finn
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3654-3671, 2023.

Abstract

We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations in the context of realistic robot tasks. Recent offline model-free approaches successfully use online fine-tuning to either improve the performance of the agent over the data collection policy or adapt to novel tasks. At the same time, model-based RL algorithms have achieved significant progress in sample efficiency and the complexity of the tasks they can solve, yet remain under-utilized in the fine-tuning setting. In this work, we argue that existing methods for high-dimensional model-based offline RL are not suitable for offline-to-online fine-tuning due to issues with distribution shifts, off-dynamics data, and non-stationary rewards. We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization, while preventing model exploitation by controlling epistemic uncertainty. We find that our approach successfully solves tasks from the MetaWorld benchmark, as well as the Franka Kitchen robot manipulation environment completely from images. To our knowledge, MOTO is the first and only method to solve this environment from pixels.
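
The core mechanism sketched in the abstract, short policy rollouts through a learned dynamics model used for value expansion, with an epistemic-uncertainty penalty to prevent model exploitation, can be illustrated with the minimal Python sketch below. This is not the authors' implementation: the function and argument names (rollout_value_expansion, ensemble, penalty_weight, horizon) are hypothetical stand-ins, and the uncertainty proxy (ensemble disagreement) is one common choice, assumed here for illustration only.

# Minimal, illustrative sketch of k-step model-based value expansion with an
# ensemble-disagreement uncertainty penalty. All names are hypothetical; the
# policy, ensemble members, reward_fn, and value_fn are assumed to be callables.
import numpy as np

def rollout_value_expansion(state, policy, ensemble, reward_fn, value_fn,
                            horizon=5, gamma=0.99, penalty_weight=1.0):
    """Estimate a value target by rolling the policy through a learned
    dynamics ensemble, penalizing disagreement across ensemble members
    (a proxy for epistemic uncertainty)."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        # Each ensemble member predicts the next state; the spread across
        # members measures how unreliable the model is in this region.
        predictions = np.stack([model(state, action) for model in ensemble])
        uncertainty = predictions.std(axis=0).mean()
        next_state = predictions.mean(axis=0)
        # Penalized reward discourages exploiting states the model is unsure about.
        total += discount * (reward_fn(state, action) - penalty_weight * uncertainty)
        discount *= gamma
        state = next_state
    # Bootstrap with the learned value function at the rollout horizon.
    return total + discount * value_fn(state)

The penalty term reduces the value assigned to trajectories that pass through states where the learned models disagree, which is the abstract's stated safeguard against model exploitation during on-policy fine-tuning.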

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-rafailov23a,
  title     = {MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning},
  author    = {Rafailov, Rafael and Hatch, Kyle Beltran and Kolev, Victor and Martin, John D. and Phielipp, Mariano and Finn, Chelsea},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3654--3671},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/rafailov23a/rafailov23a.pdf},
  url       = {https://proceedings.mlr.press/v229/rafailov23a.html}
}
APA
Rafailov, R., Hatch, K. B., Kolev, V., Martin, J. D., Phielipp, M. & Finn, C. (2023). MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3654-3671. Available from https://proceedings.mlr.press/v229/rafailov23a.html.