PI-QT-Opt: Predictive Information Improves Multi-Task Robotic Reinforcement Learning at Scale

Kuang-Huei Lee, Ted Xiao, Adrian Li, Paul Wohlhart, Ian Fischer, Yao Lu
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1696-1707, 2023.

Abstract

The predictive information, the mutual information between the past and future, has been shown to be a useful representation learning auxiliary loss for training reinforcement learning agents, as the ability to model what will happen next is critical to success on many control tasks. While existing studies are largely restricted to training specialist agents on single-task settings in simulation, in this work, we study modeling the predictive information for robotic agents and its importance for general-purpose agents that are trained to master a large repertoire of diverse skills from large amounts of data. Specifically, we introduce Predictive Information QT-Opt (PI-QT-Opt), a QT-Opt agent augmented with an auxiliary loss that learns representations of the predictive information to solve up to 297 vision-based robot manipulation tasks in simulation and the real world with a single set of parameters. We demonstrate that modeling the predictive information significantly improves success rates on the training tasks and leads to better zero-shot transfer to unseen novel tasks. Finally, we evaluate PI-QT-Opt on real robots, achieving substantial and consistent improvement over QT-Opt in multiple experimental settings of varying environments, skills, and multi-task configurations.
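The abstract describes an auxiliary loss that learns representations capturing the predictive information, i.e. the mutual information between past and future observations. As a rough illustration only (the paper's actual objective is not reproduced here), a common way to lower-bound such a mutual information is a contrastive InfoNCE-style loss over paired past/future embeddings, where matching pairs in a batch are positives and all other pairings are negatives. The function below is a minimal NumPy sketch of that idea; the embedding shapes and temperature are illustrative assumptions, not values from the paper.

```python
import numpy as np

def info_nce_loss(z_past, z_future, temperature=0.1):
    """Contrastive (InfoNCE-style) lower bound on I(past; future).

    z_past, z_future: (batch, dim) embeddings of past / future windows.
    Row i of z_past and row i of z_future form a positive pair; every
    other row of z_future serves as a negative for row i.
    """
    # L2-normalize so the logits are cosine similarities.
    z_past = z_past / np.linalg.norm(z_past, axis=1, keepdims=True)
    z_future = z_future / np.linalg.norm(z_future, axis=1, keepdims=True)
    logits = z_past @ z_future.T / temperature       # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pair on the diagonal as the target.
    return -np.mean(np.diag(log_probs))
```

In an agent like the one described, a loss of this general shape would be added to the usual Q-learning objective so the shared encoder is trained on both signals; the weighting between the two terms is a design choice not specified here.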

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-lee23a,
  title     = {PI-QT-Opt: Predictive Information Improves Multi-Task Robotic Reinforcement Learning at Scale},
  author    = {Lee, Kuang-Huei and Xiao, Ted and Li, Adrian and Wohlhart, Paul and Fischer, Ian and Lu, Yao},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1696--1707},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/lee23a/lee23a.pdf},
  url       = {https://proceedings.mlr.press/v205/lee23a.html},
  abstract  = {The predictive information, the mutual information between the past and future, has been shown to be a useful representation learning auxiliary loss for training reinforcement learning agents, as the ability to model what will happen next is critical to success on many control tasks. While existing studies are largely restricted to training specialist agents on single-task settings in simulation, in this work, we study modeling the predictive information for robotic agents and its importance for general-purpose agents that are trained to master a large repertoire of diverse skills from large amounts of data. Specifically, we introduce Predictive Information QT-Opt (PI-QT-Opt), a QT-Opt agent augmented with an auxiliary loss that learns representations of the predictive information to solve up to 297 vision-based robot manipulation tasks in simulation and the real world with a single set of parameters. We demonstrate that modeling the predictive information significantly improves success rates on the training tasks and leads to better zero-shot transfer to unseen novel tasks. Finally, we evaluate PI-QT-Opt on real robots, achieving substantial and consistent improvement over QT-Opt in multiple experimental settings of varying environments, skills, and multi-task configurations.}
}
Endnote
%0 Conference Paper
%T PI-QT-Opt: Predictive Information Improves Multi-Task Robotic Reinforcement Learning at Scale
%A Kuang-Huei Lee
%A Ted Xiao
%A Adrian Li
%A Paul Wohlhart
%A Ian Fischer
%A Yao Lu
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-lee23a
%I PMLR
%P 1696--1707
%U https://proceedings.mlr.press/v205/lee23a.html
%V 205
%X The predictive information, the mutual information between the past and future, has been shown to be a useful representation learning auxiliary loss for training reinforcement learning agents, as the ability to model what will happen next is critical to success on many control tasks. While existing studies are largely restricted to training specialist agents on single-task settings in simulation, in this work, we study modeling the predictive information for robotic agents and its importance for general-purpose agents that are trained to master a large repertoire of diverse skills from large amounts of data. Specifically, we introduce Predictive Information QT-Opt (PI-QT-Opt), a QT-Opt agent augmented with an auxiliary loss that learns representations of the predictive information to solve up to 297 vision-based robot manipulation tasks in simulation and the real world with a single set of parameters. We demonstrate that modeling the predictive information significantly improves success rates on the training tasks and leads to better zero-shot transfer to unseen novel tasks. Finally, we evaluate PI-QT-Opt on real robots, achieving substantial and consistent improvement over QT-Opt in multiple experimental settings of varying environments, skills, and multi-task configurations.
APA
Lee, K., Xiao, T., Li, A., Wohlhart, P., Fischer, I. & Lu, Y. (2023). PI-QT-Opt: Predictive Information Improves Multi-Task Robotic Reinforcement Learning at Scale. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1696-1707. Available from https://proceedings.mlr.press/v205/lee23a.html.

Related Material